I1005 16:42:24.376206 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1005 16:42:24.448151 7 e2e.go:129] Starting e2e run "e5380171-1611-4037-975e-f9b0a62834a8" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1601916143 - Will randomize all specs
Will run 303 of 5232 specs

Oct 5 16:42:24.505: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 16:42:24.510: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 5 16:42:24.524: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 5 16:42:24.553: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 5 16:42:24.553: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 5 16:42:24.553: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 5 16:42:24.559: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 5 16:42:24.559: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 5 16:42:24.559: INFO: e2e test version: v1.19.3-rc.0
Oct 5 16:42:24.560: INFO: kube-apiserver version: v1.19.0
Oct 5 16:42:24.560: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 16:42:24.563: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:42:24.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Oct 5 16:42:24.637: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-306ddd75-d745-4137-bb25-3ca5a5231ac8
STEP: Creating a pod to test consume configMaps
Oct 5 16:42:24.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272" in namespace "configmap-3800" to be "Succeeded or Failed"
Oct 5 16:42:24.666: INFO: Pod "pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10449ms
Oct 5 16:42:26.863: INFO: Pod "pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199221319s
Oct 5 16:42:28.867: INFO: Pod "pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203672374s
STEP: Saw pod success
Oct 5 16:42:28.867: INFO: Pod "pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272" satisfied condition "Succeeded or Failed"
Oct 5 16:42:28.870: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272 container configmap-volume-test:
STEP: delete the pod
Oct 5 16:42:29.101: INFO: Waiting for pod pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272 to disappear
Oct 5 16:42:29.133: INFO: Pod pod-configmaps-d0206d25-d03d-4fb8-9d10-038d498e5272 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:42:29.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3800" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":30,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:42:29.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-1683
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 5 16:42:29.345: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 5 16:42:29.475: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 16:42:31.567: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 16:42:33.494: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 16:42:35.479: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 16:42:37.479: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 16:42:39.480: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 16:42:41.866: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 16:42:43.519: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 16:42:45.525: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 5 16:42:45.530: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 5 16:42:47.534: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 5 16:42:49.534: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 5 16:42:51.872: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 5 16:42:56.508: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.125 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1683 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 16:42:56.508: INFO: >>> kubeConfig: /root/.kube/config
I1005 16:42:56.548232 7 log.go:181] (0xc0034e28f0) (0xc003656aa0) Create stream
I1005 16:42:56.548259 7 log.go:181] (0xc0034e28f0) (0xc003656aa0) Stream added, broadcasting: 1
I1005 16:42:56.551243 7 log.go:181] (0xc0034e28f0) Reply frame received for 1
I1005 16:42:56.551293 7 log.go:181] (0xc0034e28f0) (0xc001ffb400) Create stream
I1005 16:42:56.551314 7 log.go:181] (0xc0034e28f0) (0xc001ffb400) Stream added, broadcasting: 3
I1005 16:42:56.552130 7 log.go:181] (0xc0034e28f0) Reply frame received for 3
I1005 16:42:56.552167 7 log.go:181] (0xc0034e28f0) (0xc0035d10e0) Create stream
I1005 16:42:56.552178 7 log.go:181] (0xc0034e28f0) (0xc0035d10e0) Stream added, broadcasting: 5
I1005 16:42:56.553500 7 log.go:181] (0xc0034e28f0) Reply frame received for 5
I1005 16:42:57.631178 7 log.go:181] (0xc0034e28f0) Data frame received for 3
I1005 16:42:57.631213 7 log.go:181] (0xc001ffb400) (3) Data frame handling
I1005 16:42:57.631235 7 log.go:181] (0xc001ffb400) (3) Data frame sent
I1005 16:42:57.631262 7 log.go:181] (0xc0034e28f0) Data frame received for 5
I1005 16:42:57.631278 7 log.go:181] (0xc0035d10e0) (5) Data frame handling
I1005 16:42:57.631362 7 log.go:181] (0xc0034e28f0) Data frame received for 3
I1005 16:42:57.631381 7 log.go:181] (0xc001ffb400) (3) Data frame handling
I1005 16:42:57.632670 7 log.go:181] (0xc0034e28f0) Data frame received for 1
I1005 16:42:57.632688 7 log.go:181] (0xc003656aa0) (1) Data frame handling
I1005 16:42:57.632699 7 log.go:181] (0xc003656aa0) (1) Data frame sent
I1005 16:42:57.632715 7 log.go:181] (0xc0034e28f0) (0xc003656aa0) Stream removed, broadcasting: 1
I1005 16:42:57.632736 7 log.go:181] (0xc0034e28f0) Go away received
I1005 16:42:57.633096 7 log.go:181] (0xc0034e28f0) (0xc003656aa0) Stream removed, broadcasting: 1
I1005 16:42:57.633114 7 log.go:181] (0xc0034e28f0) (0xc001ffb400) Stream removed, broadcasting: 3
I1005 16:42:57.633123 7 log.go:181] (0xc0034e28f0) (0xc0035d10e0) Stream removed, broadcasting: 5
Oct 5 16:42:57.633: INFO: Found all expected endpoints: [netserver-0]
Oct 5 16:42:57.656: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.114 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1683 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 16:42:57.656: INFO: >>> kubeConfig: /root/.kube/config
I1005 16:42:57.703784 7 log.go:181] (0xc002d2d810) (0xc003757900) Create stream
I1005 16:42:57.703821 7 log.go:181] (0xc002d2d810) (0xc003757900) Stream added, broadcasting: 1
I1005 16:42:57.708907 7 log.go:181] (0xc002d2d810) Reply frame received for 1
I1005 16:42:57.708984 7 log.go:181] (0xc002d2d810) (0xc003f04000) Create stream
I1005 16:42:57.709014 7 log.go:181] (0xc002d2d810) (0xc003f04000) Stream added, broadcasting: 3
I1005 16:42:57.709951 7 log.go:181] (0xc002d2d810) Reply frame received for 3
I1005 16:42:57.710215 7 log.go:181] (0xc002d2d810) (0xc0023cc000) Create stream
I1005 16:42:57.710294 7 log.go:181] (0xc002d2d810) (0xc0023cc000) Stream added, broadcasting: 5
I1005 16:42:57.711856 7 log.go:181] (0xc002d2d810) Reply frame received for 5
I1005 16:42:58.803216 7 log.go:181] (0xc002d2d810) Data frame received for 3
I1005 16:42:58.803241 7 log.go:181] (0xc003f04000) (3) Data frame handling
I1005 16:42:58.803262 7 log.go:181] (0xc003f04000) (3) Data frame sent
I1005 16:42:58.803444 7 log.go:181] (0xc002d2d810) Data frame received for 3
I1005 16:42:58.803464 7 log.go:181] (0xc003f04000) (3) Data frame handling
I1005 16:42:58.804300 7 log.go:181] (0xc002d2d810) Data frame received for 5
I1005 16:42:58.804329 7 log.go:181] (0xc0023cc000) (5) Data frame handling
I1005 16:42:58.805423 7 log.go:181] (0xc002d2d810) Data frame received for 1
I1005 16:42:58.805446 7 log.go:181] (0xc003757900) (1) Data frame handling
I1005 16:42:58.805464 7 log.go:181] (0xc003757900) (1) Data frame sent
I1005 16:42:58.805480 7 log.go:181] (0xc002d2d810) (0xc003757900) Stream removed, broadcasting: 1
I1005 16:42:58.805496 7 log.go:181] (0xc002d2d810) Go away received
I1005 16:42:58.805661 7 log.go:181] (0xc002d2d810) (0xc003757900) Stream removed, broadcasting: 1
I1005 16:42:58.805689 7 log.go:181] (0xc002d2d810) (0xc003f04000) Stream removed, broadcasting: 3
I1005 16:42:58.805696 7 log.go:181] (0xc002d2d810) (0xc0023cc000) Stream removed, broadcasting: 5
Oct 5 16:42:58.805: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:42:58.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1683" for this suite.

• [SLOW TEST:29.934 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":52,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:42:59.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create deployment with httpd image
Oct 5 16:42:59.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f -'
Oct 5 16:43:04.417: INFO: stderr: ""
Oct 5 16:43:04.417: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Oct 5 16:43:04.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config diff -f -'
Oct 5 16:43:04.955: INFO: rc: 1
Oct 5 16:43:04.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete -f -'
Oct 5 16:43:05.100: INFO: stderr: ""
Oct 5 16:43:05.100: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:43:05.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4178" for this suite.
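Editor's note (not part of the log): the `rc: 1` above is the expected, passing outcome. `kubectl diff` follows the `diff(1)` exit-code convention: 0 means the live and declared objects match, 1 means differences were found, and greater than 1 means an error occurred. A minimal local sketch of the same convention using plain `diff` (file names are illustrative scratch files, not from the test):

```shell
# diff(1)-style exit codes, which `kubectl diff` mirrors:
#   0  = no differences
#   1  = differences found  (what the test treats as success)
#   >1 = an error occurred
printf 'image: httpd:2.4.38\n' > declared.yaml   # scratch file: declared state
printf 'image: httpd:2.4.39\n' > live.yaml       # scratch file: live state
diff live.yaml declared.yaml
echo "exit code: $?"
```

Because the two files differ, `diff` exits with 1, which is what the e2e test checks for after mutating the live Deployment's image.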
• [SLOW TEST:6.120 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl diff
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888
should check if kubectl diff finds a difference for Deployments [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":3,"skipped":69,"failed":0}
SSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:43:05.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
Oct 5 16:43:06.599: INFO: created pod pod-service-account-defaultsa
Oct 5 16:43:06.599: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Oct 5 16:43:06.755: INFO: created pod pod-service-account-mountsa
Oct 5 16:43:06.755: INFO: pod pod-service-account-mountsa service account token volume mount: true
Oct 5 16:43:06.849: INFO: created pod pod-service-account-nomountsa
Oct 5 16:43:06.849: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Oct 5 16:43:07.190: INFO: created pod pod-service-account-defaultsa-mountspec
Oct 5 16:43:07.190: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Oct 5 16:43:07.418: INFO: created pod pod-service-account-mountsa-mountspec
Oct 5 16:43:07.418: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Oct 5 16:43:07.721: INFO: created pod pod-service-account-nomountsa-mountspec
Oct 5 16:43:07.721: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Oct 5 16:43:08.104: INFO: created pod pod-service-account-defaultsa-nomountspec
Oct 5 16:43:08.104: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Oct 5 16:43:08.376: INFO: created pod pod-service-account-mountsa-nomountspec
Oct 5 16:43:08.376: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Oct 5 16:43:08.927: INFO: created pod pod-service-account-nomountsa-nomountspec
Oct 5 16:43:08.927: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:43:08.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6343" for this suite.
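Editor's note (not part of the log): the automount opt-out exercised above is controlled by the `automountServiceAccountToken` field, which can be set on a ServiceAccount or on a Pod spec; the pod-level value takes precedence, which is why the `*-mountspec`/`*-nomountspec` pods above override their ServiceAccount's setting. A minimal illustrative manifest, not taken from the test itself (names and image are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa              # placeholder name
automountServiceAccountToken: false   # SA-level default: do not mount a token
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod             # placeholder name
spec:
  serviceAccountName: no-token-sa
  # Pod-level setting overrides the ServiceAccount-level one.
  automountServiceAccountToken: false
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2  # placeholder image
```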
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":4,"skipped":74,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:43:09.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 16:43:13.613: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 16:43:16.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 16:43:18.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 16:43:20.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 16:43:22.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737512993, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 16:43:25.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:43:25.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6492" for this suite.
STEP: Destroying namespace "webhook-6492-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.028 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":5,"skipped":79,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:43:25.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct 5 16:43:33.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:33.874: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:35.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:35.993: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:37.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:37.878: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:39.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:39.879: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:41.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:41.878: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:43.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:43.880: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:45.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:45.879: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:47.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:47.879: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 5 16:43:49.874: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 5 16:43:49.914: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:43:49.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6647" for this suite.

• [SLOW TEST:24.324 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":91,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:43:49.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 5 16:43:50.542: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 5 16:43:50.671: INFO: Waiting for terminating namespaces to be deleted...
Oct 5 16:43:50.694: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Oct 5 16:43:50.735: INFO: rally-ecef50cb-0uaw2fc4-q8f5p from c-rally-ecef50cb-yqyhznqd started at 2020-10-05 16:43:34 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.735: INFO: Container rally-ecef50cb-0uaw2fc4 ready: true, restart count 0
Oct 5 16:43:50.735: INFO: pod-handle-http-request from container-lifecycle-hook-6647 started at 2020-10-05 16:43:25 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.735: INFO: Container pod-handle-http-request ready: true, restart count 0
Oct 5 16:43:50.735: INFO: kindnet-9tmlz from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.735: INFO: Container kindnet-cni ready: true, restart count 0
Oct 5 16:43:50.735: INFO: kube-proxy-fk9hq from kube-system started at 2020-09-23 08:30:39 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.735: INFO: Container kube-proxy ready: true, restart count 0
Oct 5 16:43:50.735: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Oct 5 16:43:50.772: INFO: rally-ecef50cb-0uaw2fc4-jgnj8 from c-rally-ecef50cb-yqyhznqd started at 2020-10-05 16:43:33 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.772: INFO: Container rally-ecef50cb-0uaw2fc4 ready: true, restart count 0
Oct 5 16:43:50.772: INFO: kindnet-z6tnh from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.772: INFO: Container kindnet-cni ready: true, restart count 0
Oct 5 16:43:50.772: INFO: kube-proxy-whjz5 from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded)
Oct 5 16:43:50.772: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-aef968e5-8255-4622-a005-9077aefe2706 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-aef968e5-8255-4622-a005-9077aefe2706 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-aef968e5-8255-4622-a005-9077aefe2706
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:49:01.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7789" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:311.296 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":7,"skipped":126,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:49:01.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-14786259-e6d7-4c01-8c92-1f5794caeb59 in namespace container-probe-8392
Oct 5 16:49:05.410: INFO: Started pod busybox-14786259-e6d7-4c01-8c92-1f5794caeb59 in namespace container-probe-8392
STEP: checking the pod's current state and verifying that restartCount is present
Oct 5 16:49:05.413: INFO: Initial restart count of pod busybox-14786259-e6d7-4c01-8c92-1f5794caeb59 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:07.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8392" for this suite.
• [SLOW TEST:246.082 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:07.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
Oct 5 16:53:07.442: INFO: created test-event-1
Oct 5 16:53:07.446: INFO: created test-event-2
Oct 5 16:53:07.467: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Oct 5 16:53:07.482: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Oct 5 16:53:07.501: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:07.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5085" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":9,"skipped":175,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:07.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-9bcb384a-355e-453b-ab11-b7c39a35a66f
STEP: Creating a pod to test consume configMaps
Oct 5 16:53:07.598: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb" in namespace "projected-2385" to be "Succeeded or Failed"
Oct 5 16:53:07.621: INFO: Pod "pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.703319ms
Oct 5 16:53:09.625: INFO: Pod "pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026757884s
Oct 5 16:53:11.629: INFO: Pod "pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03116415s
STEP: Saw pod success
Oct 5 16:53:11.629: INFO: Pod "pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb" satisfied condition "Succeeded or Failed"
Oct 5 16:53:11.632: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb container projected-configmap-volume-test:
STEP: delete the pod
Oct 5 16:53:11.773: INFO: Waiting for pod pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb to disappear
Oct 5 16:53:11.906: INFO: Pod pod-projected-configmaps-049921f8-7bd7-493c-b059-37c7418a48cb no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:11.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2385" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":183,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:11.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 5 16:53:16.563: INFO: Successfully updated pod "pod-update-6cff0312-9a52-4b0e-8a84-07e1d7c580dc"
STEP: verifying the updated pod is in kubernetes
Oct 5 16:53:16.579: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:16.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1298" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:16.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should provide secure master service [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:16.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7401" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":12,"skipped":286,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:16.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 16:53:16.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0" in namespace "downward-api-8198" to be "Succeeded or Failed"
Oct 5 16:53:16.846: INFO: Pod "downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.777397ms
Oct 5 16:53:18.850: INFO: Pod "downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019853915s
Oct 5 16:53:20.854: INFO: Pod "downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023723294s
STEP: Saw pod success
Oct 5 16:53:20.854: INFO: Pod "downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0" satisfied condition "Succeeded or Failed"
Oct 5 16:53:20.856: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0 container client-container:
STEP: delete the pod
Oct 5 16:53:20.945: INFO: Waiting for pod downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0 to disappear
Oct 5 16:53:20.950: INFO: Pod downwardapi-volume-7b3e0267-a5c8-4905-9f98-88dd5e5fc9a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:20.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8198" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":315,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:20.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5765
[It] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-5765
Oct 5 16:53:21.117: INFO: Found 0 stateful pods, waiting for 1
Oct 5 16:53:31.122: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 5 16:53:31.170: INFO: Deleting all statefulset in ns statefulset-5765
Oct 5 16:53:31.176: INFO: Scaling statefulset ss to 0
Oct 5 16:53:51.255: INFO: Waiting for statefulset status.replicas updated to 0
Oct 5 16:53:51.258: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:51.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5765" for this suite.
• [SLOW TEST:30.338 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":14,"skipped":321,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:51.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 16:53:51.382: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89" in namespace "downward-api-2746" to be "Succeeded or Failed"
Oct 5 16:53:51.419: INFO: Pod "downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89": Phase="Pending", Reason="", readiness=false. Elapsed: 37.367332ms
Oct 5 16:53:53.424: INFO: Pod "downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042124562s
Oct 5 16:53:55.433: INFO: Pod "downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89": Phase="Running", Reason="", readiness=true. Elapsed: 4.051215392s
Oct 5 16:53:57.439: INFO: Pod "downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057110869s
STEP: Saw pod success
Oct 5 16:53:57.439: INFO: Pod "downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89" satisfied condition "Succeeded or Failed"
Oct 5 16:53:57.442: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89 container client-container:
STEP: delete the pod
Oct 5 16:53:57.473: INFO: Waiting for pod downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89 to disappear
Oct 5 16:53:57.483: INFO: Pod downwardapi-volume-c4587244-bece-40dc-bb7e-c04153363a89 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:57.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2746" for this suite.
• [SLOW TEST:6.186 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":332,"failed":0}
SSSSSS
------------------------------
[sig-node] PodTemplates should delete a collection of pod templates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:57.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
Oct 5 16:53:57.559: INFO: created test-podtemplate-1
Oct 5 16:53:57.561: INFO: created test-podtemplate-2
Oct 5 16:53:57.567: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Oct 5 16:53:57.582: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Oct 5 16:53:57.605: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:53:57.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9687" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":16,"skipped":338,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:53:57.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Oct 5 16:54:02.264: INFO: Successfully updated pod "annotationupdate9b46dbfc-e1c5-4c85-a7e9-c1cdbf70024a"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:54:06.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-941" for this suite.
• [SLOW TEST:8.686 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":349,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 16:54:06.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 16:54:06.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf" in namespace "projected-1985" to be "Succeeded or Failed"
Oct 5 16:54:06.399: INFO: Pod "downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.21254ms
Oct 5 16:54:08.403: INFO: Pod "downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015465806s
Oct 5 16:54:10.416: INFO: Pod "downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028002285s
STEP: Saw pod success
Oct 5 16:54:10.416: INFO: Pod "downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf" satisfied condition "Succeeded or Failed"
Oct 5 16:54:10.419: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf container client-container:
STEP: delete the pod
Oct 5 16:54:10.466: INFO: Waiting for pod downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf to disappear
Oct 5 16:54:10.470: INFO: Pod downwardapi-volume-a41147ec-dd30-425e-af9d-d04188ee14bf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 16:54:10.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1985" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:54:10.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 16:54:10.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f23881b5-8356-40c7-84af-668648090102" in namespace "downward-api-6303" to be "Succeeded or Failed" Oct 5 16:54:10.806: INFO: Pod "downwardapi-volume-f23881b5-8356-40c7-84af-668648090102": Phase="Pending", Reason="", readiness=false. Elapsed: 10.521959ms Oct 5 16:54:12.835: INFO: Pod "downwardapi-volume-f23881b5-8356-40c7-84af-668648090102": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.039668442s Oct 5 16:54:14.841: INFO: Pod "downwardapi-volume-f23881b5-8356-40c7-84af-668648090102": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045906622s STEP: Saw pod success Oct 5 16:54:14.841: INFO: Pod "downwardapi-volume-f23881b5-8356-40c7-84af-668648090102" satisfied condition "Succeeded or Failed" Oct 5 16:54:14.844: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f23881b5-8356-40c7-84af-668648090102 container client-container: STEP: delete the pod Oct 5 16:54:15.041: INFO: Waiting for pod downwardapi-volume-f23881b5-8356-40c7-84af-668648090102 to disappear Oct 5 16:54:15.089: INFO: Pod downwardapi-volume-f23881b5-8356-40c7-84af-668648090102 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:54:15.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6303" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":407,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:54:15.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-1f0b2b3c-4582-4b46-a537-b7a363f369fc STEP: Creating configMap with name cm-test-opt-upd-2207da9d-d71c-48f2-ae3f-596144535f53 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-1f0b2b3c-4582-4b46-a537-b7a363f369fc STEP: Updating configmap cm-test-opt-upd-2207da9d-d71c-48f2-ae3f-596144535f53 STEP: Creating configMap with name cm-test-opt-create-b4e1e8a7-1aef-40ae-91ea-0ae3cd927357 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:54:25.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4725" for this suite. 
• [SLOW TEST:10.460 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":418,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:54:25.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:54:25.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7582" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":21,"skipped":418,"failed":0} ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:54:25.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 16:56:26.004: INFO: Deleting pod "var-expansion-2e589bf1-0b14-4e7b-b72d-52fa4174b7a4" in namespace 
"var-expansion-3252" Oct 5 16:56:26.009: INFO: Wait up to 5m0s for pod "var-expansion-2e589bf1-0b14-4e7b-b72d-52fa4174b7a4" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:56:30.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3252" for this suite. • [SLOW TEST:124.161 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":22,"skipped":418,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:56:30.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-jxnw STEP: Creating a pod to test atomic-volume-subpath Oct 5 16:56:30.134: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jxnw" in namespace "subpath-2612" to be "Succeeded or Failed" Oct 5 16:56:30.137: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.255573ms Oct 5 16:56:32.143: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009332346s Oct 5 16:56:34.148: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 4.014048453s Oct 5 16:56:36.152: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 6.017496584s Oct 5 16:56:38.156: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 8.022119963s Oct 5 16:56:40.162: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 10.028132655s Oct 5 16:56:42.168: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 12.033898516s Oct 5 16:56:44.171: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 14.037472701s Oct 5 16:56:46.175: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 16.04114418s Oct 5 16:56:48.181: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.04721394s Oct 5 16:56:50.187: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 20.053233093s Oct 5 16:56:52.193: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Running", Reason="", readiness=true. Elapsed: 22.059440946s Oct 5 16:56:54.199: INFO: Pod "pod-subpath-test-projected-jxnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064722775s STEP: Saw pod success Oct 5 16:56:54.199: INFO: Pod "pod-subpath-test-projected-jxnw" satisfied condition "Succeeded or Failed" Oct 5 16:56:54.202: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-jxnw container test-container-subpath-projected-jxnw: STEP: delete the pod Oct 5 16:56:54.248: INFO: Waiting for pod pod-subpath-test-projected-jxnw to disappear Oct 5 16:56:54.253: INFO: Pod pod-subpath-test-projected-jxnw no longer exists STEP: Deleting pod pod-subpath-test-projected-jxnw Oct 5 16:56:54.253: INFO: Deleting pod "pod-subpath-test-projected-jxnw" in namespace "subpath-2612" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:56:54.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2612" for this suite. 
• [SLOW TEST:24.226 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":23,"skipped":428,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:56:54.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 
STEP: Creating service test in namespace statefulset-8082 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8082 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8082 Oct 5 16:56:54.398: INFO: Found 0 stateful pods, waiting for 1 Oct 5 16:57:04.403: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 5 16:57:04.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 16:57:07.694: INFO: stderr: "I1005 16:57:07.545821 84 log.go:181] (0xc000142370) (0xc000dac000) Create stream\nI1005 16:57:07.545909 84 log.go:181] (0xc000142370) (0xc000dac000) Stream added, broadcasting: 1\nI1005 16:57:07.548210 84 log.go:181] (0xc000142370) Reply frame received for 1\nI1005 16:57:07.548258 84 log.go:181] (0xc000142370) (0xc0007dc500) Create stream\nI1005 16:57:07.548269 84 log.go:181] (0xc000142370) (0xc0007dc500) Stream added, broadcasting: 3\nI1005 16:57:07.549506 84 log.go:181] (0xc000142370) Reply frame received for 3\nI1005 16:57:07.549547 84 log.go:181] (0xc000142370) (0xc000c4c000) Create stream\nI1005 16:57:07.549559 84 log.go:181] (0xc000142370) (0xc000c4c000) Stream added, broadcasting: 5\nI1005 16:57:07.550585 84 log.go:181] (0xc000142370) Reply frame received for 5\nI1005 16:57:07.640106 84 log.go:181] (0xc000142370) Data frame received for 5\nI1005 16:57:07.640140 84 log.go:181] (0xc000c4c000) (5) Data frame 
handling\nI1005 16:57:07.640160 84 log.go:181] (0xc000c4c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 16:57:07.686095 84 log.go:181] (0xc000142370) Data frame received for 3\nI1005 16:57:07.686144 84 log.go:181] (0xc0007dc500) (3) Data frame handling\nI1005 16:57:07.686161 84 log.go:181] (0xc0007dc500) (3) Data frame sent\nI1005 16:57:07.686218 84 log.go:181] (0xc000142370) Data frame received for 3\nI1005 16:57:07.686238 84 log.go:181] (0xc0007dc500) (3) Data frame handling\nI1005 16:57:07.686274 84 log.go:181] (0xc000142370) Data frame received for 5\nI1005 16:57:07.686308 84 log.go:181] (0xc000c4c000) (5) Data frame handling\nI1005 16:57:07.688947 84 log.go:181] (0xc000142370) Data frame received for 1\nI1005 16:57:07.688975 84 log.go:181] (0xc000dac000) (1) Data frame handling\nI1005 16:57:07.688988 84 log.go:181] (0xc000dac000) (1) Data frame sent\nI1005 16:57:07.689001 84 log.go:181] (0xc000142370) (0xc000dac000) Stream removed, broadcasting: 1\nI1005 16:57:07.689238 84 log.go:181] (0xc000142370) Go away received\nI1005 16:57:07.689369 84 log.go:181] (0xc000142370) (0xc000dac000) Stream removed, broadcasting: 1\nI1005 16:57:07.689388 84 log.go:181] (0xc000142370) (0xc0007dc500) Stream removed, broadcasting: 3\nI1005 16:57:07.689399 84 log.go:181] (0xc000142370) (0xc000c4c000) Stream removed, broadcasting: 5\n" Oct 5 16:57:07.694: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 16:57:07.695: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 16:57:07.698: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 5 16:57:17.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 16:57:17.703: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 16:57:17.727: INFO: Verifying statefulset ss doesn't 
scale past 1 for another 9.999999458s Oct 5 16:57:18.731: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993677572s Oct 5 16:57:19.736: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989628299s Oct 5 16:57:20.740: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984885534s Oct 5 16:57:21.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.980294762s Oct 5 16:57:22.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975027892s Oct 5 16:57:23.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97073839s Oct 5 16:57:24.858: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.866906081s Oct 5 16:57:25.863: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.86231355s Oct 5 16:57:26.867: INFO: Verifying statefulset ss doesn't scale past 1 for another 857.585995ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8082 Oct 5 16:57:27.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 16:57:28.128: INFO: stderr: "I1005 16:57:28.031689 102 log.go:181] (0xc00037afd0) (0xc00038f360) Create stream\nI1005 16:57:28.031747 102 log.go:181] (0xc00037afd0) (0xc00038f360) Stream added, broadcasting: 1\nI1005 16:57:28.038738 102 log.go:181] (0xc00037afd0) Reply frame received for 1\nI1005 16:57:28.038789 102 log.go:181] (0xc00037afd0) (0xc0007921e0) Create stream\nI1005 16:57:28.038810 102 log.go:181] (0xc00037afd0) (0xc0007921e0) Stream added, broadcasting: 3\nI1005 16:57:28.040027 102 log.go:181] (0xc00037afd0) Reply frame received for 3\nI1005 16:57:28.040066 102 log.go:181] (0xc00037afd0) (0xc0004f92c0) Create stream\nI1005 16:57:28.040077 102 log.go:181] (0xc00037afd0) (0xc0004f92c0) Stream added, 
broadcasting: 5\nI1005 16:57:28.041711 102 log.go:181] (0xc00037afd0) Reply frame received for 5\nI1005 16:57:28.120242 102 log.go:181] (0xc00037afd0) Data frame received for 3\nI1005 16:57:28.120275 102 log.go:181] (0xc0007921e0) (3) Data frame handling\nI1005 16:57:28.120285 102 log.go:181] (0xc0007921e0) (3) Data frame sent\nI1005 16:57:28.120329 102 log.go:181] (0xc00037afd0) Data frame received for 5\nI1005 16:57:28.120372 102 log.go:181] (0xc0004f92c0) (5) Data frame handling\nI1005 16:57:28.120416 102 log.go:181] (0xc0004f92c0) (5) Data frame sent\nI1005 16:57:28.120441 102 log.go:181] (0xc00037afd0) Data frame received for 5\nI1005 16:57:28.120462 102 log.go:181] (0xc0004f92c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 16:57:28.120570 102 log.go:181] (0xc00037afd0) Data frame received for 3\nI1005 16:57:28.120609 102 log.go:181] (0xc0007921e0) (3) Data frame handling\nI1005 16:57:28.122369 102 log.go:181] (0xc00037afd0) Data frame received for 1\nI1005 16:57:28.122393 102 log.go:181] (0xc00038f360) (1) Data frame handling\nI1005 16:57:28.122415 102 log.go:181] (0xc00038f360) (1) Data frame sent\nI1005 16:57:28.122433 102 log.go:181] (0xc00037afd0) (0xc00038f360) Stream removed, broadcasting: 1\nI1005 16:57:28.122485 102 log.go:181] (0xc00037afd0) Go away received\nI1005 16:57:28.122856 102 log.go:181] (0xc00037afd0) (0xc00038f360) Stream removed, broadcasting: 1\nI1005 16:57:28.122878 102 log.go:181] (0xc00037afd0) (0xc0007921e0) Stream removed, broadcasting: 3\nI1005 16:57:28.122891 102 log.go:181] (0xc00037afd0) (0xc0004f92c0) Stream removed, broadcasting: 5\n" Oct 5 16:57:28.128: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 16:57:28.128: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 16:57:28.133: INFO: Found 1 stateful pods, waiting for 3 Oct 5 16:57:38.138: INFO: Waiting for 
pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 16:57:38.138: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 16:57:38.138: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 5 16:57:38.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 16:57:38.394: INFO: stderr: "I1005 16:57:38.289988 120 log.go:181] (0xc000c1d1e0) (0xc000c14640) Create stream\nI1005 16:57:38.290190 120 log.go:181] (0xc000c1d1e0) (0xc000c14640) Stream added, broadcasting: 1\nI1005 16:57:38.296661 120 log.go:181] (0xc000c1d1e0) Reply frame received for 1\nI1005 16:57:38.296706 120 log.go:181] (0xc000c1d1e0) (0xc0007cc000) Create stream\nI1005 16:57:38.296720 120 log.go:181] (0xc000c1d1e0) (0xc0007cc000) Stream added, broadcasting: 3\nI1005 16:57:38.297563 120 log.go:181] (0xc000c1d1e0) Reply frame received for 3\nI1005 16:57:38.297594 120 log.go:181] (0xc000c1d1e0) (0xc000b8c0a0) Create stream\nI1005 16:57:38.297602 120 log.go:181] (0xc000c1d1e0) (0xc000b8c0a0) Stream added, broadcasting: 5\nI1005 16:57:38.298279 120 log.go:181] (0xc000c1d1e0) Reply frame received for 5\nI1005 16:57:38.386128 120 log.go:181] (0xc000c1d1e0) Data frame received for 3\nI1005 16:57:38.386157 120 log.go:181] (0xc0007cc000) (3) Data frame handling\nI1005 16:57:38.386165 120 log.go:181] (0xc0007cc000) (3) Data frame sent\nI1005 16:57:38.386171 120 log.go:181] (0xc000c1d1e0) Data frame received for 3\nI1005 16:57:38.386176 120 log.go:181] (0xc0007cc000) (3) Data frame handling\nI1005 16:57:38.386207 120 log.go:181] (0xc000c1d1e0) Data frame received for 5\nI1005 16:57:38.386213 120 log.go:181] (0xc000b8c0a0) (5) Data 
frame handling\nI1005 16:57:38.386220 120 log.go:181] (0xc000b8c0a0) (5) Data frame sent\nI1005 16:57:38.386226 120 log.go:181] (0xc000c1d1e0) Data frame received for 5\nI1005 16:57:38.386231 120 log.go:181] (0xc000b8c0a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 16:57:38.388059 120 log.go:181] (0xc000c1d1e0) Data frame received for 1\nI1005 16:57:38.388073 120 log.go:181] (0xc000c14640) (1) Data frame handling\nI1005 16:57:38.388084 120 log.go:181] (0xc000c14640) (1) Data frame sent\nI1005 16:57:38.388097 120 log.go:181] (0xc000c1d1e0) (0xc000c14640) Stream removed, broadcasting: 1\nI1005 16:57:38.388114 120 log.go:181] (0xc000c1d1e0) Go away received\nI1005 16:57:38.388529 120 log.go:181] (0xc000c1d1e0) (0xc000c14640) Stream removed, broadcasting: 1\nI1005 16:57:38.388548 120 log.go:181] (0xc000c1d1e0) (0xc0007cc000) Stream removed, broadcasting: 3\nI1005 16:57:38.388558 120 log.go:181] (0xc000c1d1e0) (0xc000b8c0a0) Stream removed, broadcasting: 5\n" Oct 5 16:57:38.394: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 16:57:38.394: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 16:57:38.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 16:57:38.638: INFO: stderr: "I1005 16:57:38.534882 138 log.go:181] (0xc00033cfd0) (0xc0003e2d20) Create stream\nI1005 16:57:38.534954 138 log.go:181] (0xc00033cfd0) (0xc0003e2d20) Stream added, broadcasting: 1\nI1005 16:57:38.540247 138 log.go:181] (0xc00033cfd0) Reply frame received for 1\nI1005 16:57:38.540310 138 log.go:181] (0xc00033cfd0) (0xc00031f2c0) Create stream\nI1005 16:57:38.540329 138 log.go:181] (0xc00033cfd0) (0xc00031f2c0) Stream added, broadcasting: 3\nI1005 
16:57:38.541475 138 log.go:181] (0xc00033cfd0) Reply frame received for 3\nI1005 16:57:38.541519 138 log.go:181] (0xc00033cfd0) (0xc0004f81e0) Create stream\nI1005 16:57:38.541530 138 log.go:181] (0xc00033cfd0) (0xc0004f81e0) Stream added, broadcasting: 5\nI1005 16:57:38.542488 138 log.go:181] (0xc00033cfd0) Reply frame received for 5\nI1005 16:57:38.592556 138 log.go:181] (0xc00033cfd0) Data frame received for 5\nI1005 16:57:38.592601 138 log.go:181] (0xc0004f81e0) (5) Data frame handling\nI1005 16:57:38.592639 138 log.go:181] (0xc0004f81e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 16:57:38.631461 138 log.go:181] (0xc00033cfd0) Data frame received for 5\nI1005 16:57:38.631493 138 log.go:181] (0xc0004f81e0) (5) Data frame handling\nI1005 16:57:38.631517 138 log.go:181] (0xc00033cfd0) Data frame received for 3\nI1005 16:57:38.631528 138 log.go:181] (0xc00031f2c0) (3) Data frame handling\nI1005 16:57:38.631536 138 log.go:181] (0xc00031f2c0) (3) Data frame sent\nI1005 16:57:38.631541 138 log.go:181] (0xc00033cfd0) Data frame received for 3\nI1005 16:57:38.631544 138 log.go:181] (0xc00031f2c0) (3) Data frame handling\nI1005 16:57:38.633070 138 log.go:181] (0xc00033cfd0) Data frame received for 1\nI1005 16:57:38.633101 138 log.go:181] (0xc0003e2d20) (1) Data frame handling\nI1005 16:57:38.633112 138 log.go:181] (0xc0003e2d20) (1) Data frame sent\nI1005 16:57:38.633123 138 log.go:181] (0xc00033cfd0) (0xc0003e2d20) Stream removed, broadcasting: 1\nI1005 16:57:38.633140 138 log.go:181] (0xc00033cfd0) Go away received\nI1005 16:57:38.633434 138 log.go:181] (0xc00033cfd0) (0xc0003e2d20) Stream removed, broadcasting: 1\nI1005 16:57:38.633447 138 log.go:181] (0xc00033cfd0) (0xc00031f2c0) Stream removed, broadcasting: 3\nI1005 16:57:38.633453 138 log.go:181] (0xc00033cfd0) (0xc0004f81e0) Stream removed, broadcasting: 5\n" Oct 5 16:57:38.638: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 16:57:38.638: 
INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 16:57:38.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 16:57:38.878: INFO: stderr: "I1005 16:57:38.776006 156 log.go:181] (0xc000ad9130) (0xc000ad0820) Create stream\nI1005 16:57:38.776056 156 log.go:181] (0xc000ad9130) (0xc000ad0820) Stream added, broadcasting: 1\nI1005 16:57:38.782327 156 log.go:181] (0xc000ad9130) Reply frame received for 1\nI1005 16:57:38.782381 156 log.go:181] (0xc000ad9130) (0xc000c5c000) Create stream\nI1005 16:57:38.782397 156 log.go:181] (0xc000ad9130) (0xc000c5c000) Stream added, broadcasting: 3\nI1005 16:57:38.783393 156 log.go:181] (0xc000ad9130) Reply frame received for 3\nI1005 16:57:38.783535 156 log.go:181] (0xc000ad9130) (0xc000ad0000) Create stream\nI1005 16:57:38.783554 156 log.go:181] (0xc000ad9130) (0xc000ad0000) Stream added, broadcasting: 5\nI1005 16:57:38.784424 156 log.go:181] (0xc000ad9130) Reply frame received for 5\nI1005 16:57:38.841109 156 log.go:181] (0xc000ad9130) Data frame received for 5\nI1005 16:57:38.841144 156 log.go:181] (0xc000ad0000) (5) Data frame handling\nI1005 16:57:38.841183 156 log.go:181] (0xc000ad0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 16:57:38.870410 156 log.go:181] (0xc000ad9130) Data frame received for 3\nI1005 16:57:38.870578 156 log.go:181] (0xc000c5c000) (3) Data frame handling\nI1005 16:57:38.870628 156 log.go:181] (0xc000ad9130) Data frame received for 5\nI1005 16:57:38.870676 156 log.go:181] (0xc000ad0000) (5) Data frame handling\nI1005 16:57:38.870716 156 log.go:181] (0xc000c5c000) (3) Data frame sent\nI1005 16:57:38.870741 156 log.go:181] (0xc000ad9130) Data frame received for 3\nI1005 16:57:38.870760 156 log.go:181] 
(0xc000c5c000) (3) Data frame handling\nI1005 16:57:38.872346 156 log.go:181] (0xc000ad9130) Data frame received for 1\nI1005 16:57:38.872381 156 log.go:181] (0xc000ad0820) (1) Data frame handling\nI1005 16:57:38.872403 156 log.go:181] (0xc000ad0820) (1) Data frame sent\nI1005 16:57:38.872426 156 log.go:181] (0xc000ad9130) (0xc000ad0820) Stream removed, broadcasting: 1\nI1005 16:57:38.872541 156 log.go:181] (0xc000ad9130) Go away received\nI1005 16:57:38.873012 156 log.go:181] (0xc000ad9130) (0xc000ad0820) Stream removed, broadcasting: 1\nI1005 16:57:38.873036 156 log.go:181] (0xc000ad9130) (0xc000c5c000) Stream removed, broadcasting: 3\nI1005 16:57:38.873049 156 log.go:181] (0xc000ad9130) (0xc000ad0000) Stream removed, broadcasting: 5\n" Oct 5 16:57:38.879: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 16:57:38.879: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 16:57:38.879: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 16:57:38.882: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 5 16:57:48.894: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 16:57:48.894: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 5 16:57:48.894: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 5 16:57:48.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999669s Oct 5 16:57:49.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989246801s Oct 5 16:57:50.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984013401s Oct 5 16:57:51.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.97939684s Oct 5 16:57:52.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 
5.972519805s Oct 5 16:57:53.940: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966158543s Oct 5 16:57:54.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960152232s Oct 5 16:57:55.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953514038s Oct 5 16:57:56.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946139447s Oct 5 16:57:57.966: INFO: Verifying statefulset ss doesn't scale past 3 for another 940.978219ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8082 Oct 5 16:57:58.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 16:57:59.229: INFO: stderr: "I1005 16:57:59.112638 174 log.go:181] (0xc000f1b3f0) (0xc000f12960) Create stream\nI1005 16:57:59.112684 174 log.go:181] (0xc000f1b3f0) (0xc000f12960) Stream added, broadcasting: 1\nI1005 16:57:59.117689 174 log.go:181] (0xc000f1b3f0) Reply frame received for 1\nI1005 16:57:59.117727 174 log.go:181] (0xc000f1b3f0) (0xc000f12000) Create stream\nI1005 16:57:59.117746 174 log.go:181] (0xc000f1b3f0) (0xc000f12000) Stream added, broadcasting: 3\nI1005 16:57:59.118698 174 log.go:181] (0xc000f1b3f0) Reply frame received for 3\nI1005 16:57:59.118728 174 log.go:181] (0xc000f1b3f0) (0xc0004261e0) Create stream\nI1005 16:57:59.118737 174 log.go:181] (0xc000f1b3f0) (0xc0004261e0) Stream added, broadcasting: 5\nI1005 16:57:59.119654 174 log.go:181] (0xc000f1b3f0) Reply frame received for 5\nI1005 16:57:59.218667 174 log.go:181] (0xc000f1b3f0) Data frame received for 3\nI1005 16:57:59.218709 174 log.go:181] (0xc000f12000) (3) Data frame handling\nI1005 16:57:59.218741 174 log.go:181] (0xc000f12000) (3) Data frame sent\nI1005 16:57:59.218763 174 log.go:181] (0xc000f1b3f0) Data frame received for 3\nI1005
16:57:59.218779 174 log.go:181] (0xc000f12000) (3) Data frame handling\nI1005 16:57:59.218893 174 log.go:181] (0xc000f1b3f0) Data frame received for 5\nI1005 16:57:59.218922 174 log.go:181] (0xc0004261e0) (5) Data frame handling\nI1005 16:57:59.218933 174 log.go:181] (0xc0004261e0) (5) Data frame sent\nI1005 16:57:59.218943 174 log.go:181] (0xc000f1b3f0) Data frame received for 5\nI1005 16:57:59.218951 174 log.go:181] (0xc0004261e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 16:57:59.222392 174 log.go:181] (0xc000f1b3f0) Data frame received for 1\nI1005 16:57:59.222418 174 log.go:181] (0xc000f12960) (1) Data frame handling\nI1005 16:57:59.222435 174 log.go:181] (0xc000f12960) (1) Data frame sent\nI1005 16:57:59.222455 174 log.go:181] (0xc000f1b3f0) (0xc000f12960) Stream removed, broadcasting: 1\nI1005 16:57:59.222896 174 log.go:181] (0xc000f1b3f0) (0xc000f12960) Stream removed, broadcasting: 1\nI1005 16:57:59.222925 174 log.go:181] (0xc000f1b3f0) (0xc000f12000) Stream removed, broadcasting: 3\nI1005 16:57:59.223095 174 log.go:181] (0xc000f1b3f0) (0xc0004261e0) Stream removed, broadcasting: 5\n" Oct 5 16:57:59.229: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 16:57:59.229: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 16:57:59.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 16:57:59.444: INFO: stderr: "I1005 16:57:59.375444 192 log.go:181] (0xc00074c9a0) (0xc0005c05a0) Create stream\nI1005 16:57:59.375501 192 log.go:181] (0xc00074c9a0) (0xc0005c05a0) Stream added, broadcasting: 1\nI1005 16:57:59.377240 192 log.go:181] (0xc00074c9a0) Reply frame received for 1\nI1005 16:57:59.377262 192 log.go:181] 
(0xc00074c9a0) (0xc0005c0640) Create stream\nI1005 16:57:59.377268 192 log.go:181] (0xc00074c9a0) (0xc0005c0640) Stream added, broadcasting: 3\nI1005 16:57:59.377832 192 log.go:181] (0xc00074c9a0) Reply frame received for 3\nI1005 16:57:59.377856 192 log.go:181] (0xc00074c9a0) (0xc0005c06e0) Create stream\nI1005 16:57:59.377863 192 log.go:181] (0xc00074c9a0) (0xc0005c06e0) Stream added, broadcasting: 5\nI1005 16:57:59.378362 192 log.go:181] (0xc00074c9a0) Reply frame received for 5\nI1005 16:57:59.435960 192 log.go:181] (0xc00074c9a0) Data frame received for 5\nI1005 16:57:59.436020 192 log.go:181] (0xc0005c06e0) (5) Data frame handling\nI1005 16:57:59.436078 192 log.go:181] (0xc0005c06e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 16:57:59.436121 192 log.go:181] (0xc00074c9a0) Data frame received for 5\nI1005 16:57:59.436138 192 log.go:181] (0xc0005c06e0) (5) Data frame handling\nI1005 16:57:59.436163 192 log.go:181] (0xc00074c9a0) Data frame received for 3\nI1005 16:57:59.436180 192 log.go:181] (0xc0005c0640) (3) Data frame handling\nI1005 16:57:59.436208 192 log.go:181] (0xc0005c0640) (3) Data frame sent\nI1005 16:57:59.436227 192 log.go:181] (0xc00074c9a0) Data frame received for 3\nI1005 16:57:59.436237 192 log.go:181] (0xc0005c0640) (3) Data frame handling\nI1005 16:57:59.438078 192 log.go:181] (0xc00074c9a0) Data frame received for 1\nI1005 16:57:59.438222 192 log.go:181] (0xc0005c05a0) (1) Data frame handling\nI1005 16:57:59.438291 192 log.go:181] (0xc0005c05a0) (1) Data frame sent\nI1005 16:57:59.438346 192 log.go:181] (0xc00074c9a0) (0xc0005c05a0) Stream removed, broadcasting: 1\nI1005 16:57:59.438477 192 log.go:181] (0xc00074c9a0) Go away received\nI1005 16:57:59.438976 192 log.go:181] (0xc00074c9a0) (0xc0005c05a0) Stream removed, broadcasting: 1\nI1005 16:57:59.438998 192 log.go:181] (0xc00074c9a0) (0xc0005c0640) Stream removed, broadcasting: 3\nI1005 16:57:59.439008 192 log.go:181] (0xc00074c9a0) (0xc0005c06e0) 
Stream removed, broadcasting: 5\n" Oct 5 16:57:59.445: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 16:57:59.445: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 16:57:59.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8082 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 16:57:59.643: INFO: stderr: "I1005 16:57:59.573823 210 log.go:181] (0xc000b41ad0) (0xc000b38a00) Create stream\nI1005 16:57:59.573874 210 log.go:181] (0xc000b41ad0) (0xc000b38a00) Stream added, broadcasting: 1\nI1005 16:57:59.576077 210 log.go:181] (0xc000b41ad0) Reply frame received for 1\nI1005 16:57:59.576109 210 log.go:181] (0xc000b41ad0) (0xc000cbe000) Create stream\nI1005 16:57:59.576122 210 log.go:181] (0xc000b41ad0) (0xc000cbe000) Stream added, broadcasting: 3\nI1005 16:57:59.577158 210 log.go:181] (0xc000b41ad0) Reply frame received for 3\nI1005 16:57:59.577192 210 log.go:181] (0xc000b41ad0) (0xc0009a25a0) Create stream\nI1005 16:57:59.577201 210 log.go:181] (0xc000b41ad0) (0xc0009a25a0) Stream added, broadcasting: 5\nI1005 16:57:59.578014 210 log.go:181] (0xc000b41ad0) Reply frame received for 5\nI1005 16:57:59.634983 210 log.go:181] (0xc000b41ad0) Data frame received for 3\nI1005 16:57:59.635050 210 log.go:181] (0xc000cbe000) (3) Data frame handling\nI1005 16:57:59.635076 210 log.go:181] (0xc000cbe000) (3) Data frame sent\nI1005 16:57:59.635095 210 log.go:181] (0xc000b41ad0) Data frame received for 3\nI1005 16:57:59.635112 210 log.go:181] (0xc000cbe000) (3) Data frame handling\nI1005 16:57:59.635145 210 log.go:181] (0xc000b41ad0) Data frame received for 5\nI1005 16:57:59.635163 210 log.go:181] (0xc0009a25a0) (5) Data frame handling\nI1005 16:57:59.635181 210 log.go:181] (0xc0009a25a0) (5) Data frame sent\nI1005 
16:57:59.635199 210 log.go:181] (0xc000b41ad0) Data frame received for 5\nI1005 16:57:59.635215 210 log.go:181] (0xc0009a25a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 16:57:59.637065 210 log.go:181] (0xc000b41ad0) Data frame received for 1\nI1005 16:57:59.637173 210 log.go:181] (0xc000b38a00) (1) Data frame handling\nI1005 16:57:59.637230 210 log.go:181] (0xc000b38a00) (1) Data frame sent\nI1005 16:57:59.637287 210 log.go:181] (0xc000b41ad0) (0xc000b38a00) Stream removed, broadcasting: 1\nI1005 16:57:59.637323 210 log.go:181] (0xc000b41ad0) Go away received\nI1005 16:57:59.637870 210 log.go:181] (0xc000b41ad0) (0xc000b38a00) Stream removed, broadcasting: 1\nI1005 16:57:59.637896 210 log.go:181] (0xc000b41ad0) (0xc000cbe000) Stream removed, broadcasting: 3\nI1005 16:57:59.637908 210 log.go:181] (0xc000b41ad0) (0xc0009a25a0) Stream removed, broadcasting: 5\n" Oct 5 16:57:59.643: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 16:57:59.643: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 16:57:59.643: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 16:58:29.658: INFO: Deleting all statefulset in ns statefulset-8082 Oct 5 16:58:29.661: INFO: Scaling statefulset ss to 0 Oct 5 16:58:29.673: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 16:58:29.676: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:58:29.689: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8082" for this suite. • [SLOW TEST:95.435 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":24,"skipped":433,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:58:29.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-72184210-1ed4-4fd4-be92-5d30ec0bf1c3 STEP: Creating a pod to test consume configMaps Oct 5 16:58:29.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11" in namespace "projected-3094" to be "Succeeded or Failed" Oct 5 16:58:29.795: INFO: Pod "pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11": Phase="Pending", Reason="", readiness=false. Elapsed: 3.469781ms Oct 5 16:58:31.800: INFO: Pod "pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008390893s Oct 5 16:58:33.823: INFO: Pod "pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032162803s STEP: Saw pod success Oct 5 16:58:33.824: INFO: Pod "pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11" satisfied condition "Succeeded or Failed" Oct 5 16:58:33.826: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11 container projected-configmap-volume-test: STEP: delete the pod Oct 5 16:58:33.869: INFO: Waiting for pod pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11 to disappear Oct 5 16:58:33.882: INFO: Pod pod-projected-configmaps-4393c046-64f2-4fdf-b496-6a375b8a7b11 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:58:33.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3094" for this suite. 
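For context on the projected-configMap test that just passed: the pod under test mounts a ConfigMap through a projected volume with an explicit item mapping and a file mode, then reads the file back to verify content and permissions. A minimal sketch of such a manifest follows; the image, args, names, and mode value are illustrative assumptions, not values recovered from the test fixture above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    # assumed e2e-style test image that prints a mounted file's content
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["mounttest", "--file_content=/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative ConfigMap name
          items:
          - key: data-2          # mapping: key "data-2" ...
            path: path/to/data-2 # ... appears at this relative path
            mode: 0400           # the "Item mode set" behavior under test
```

The test framework considers the pod successful when it reaches phase Succeeded, which is why the log above polls for "Succeeded or Failed" rather than for readiness.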
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:58:33.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-878 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-878 STEP: creating replication controller externalsvc in namespace services-878 I1005 16:58:34.353194 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-878, replica count: 2 I1005 16:58:37.403634 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I1005 16:58:40.403877 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 5 16:58:40.471: INFO: Creating new exec pod Oct 5 16:58:44.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-878 execpod6bxnw -- /bin/sh -x -c nslookup nodeport-service.services-878.svc.cluster.local' Oct 5 16:58:44.737: INFO: stderr: "I1005 16:58:44.633633 227 log.go:181] (0xc000d020b0) (0xc000be00a0) Create stream\nI1005 16:58:44.633688 227 log.go:181] (0xc000d020b0) (0xc000be00a0) Stream added, broadcasting: 1\nI1005 16:58:44.635426 227 log.go:181] (0xc000d020b0) Reply frame received for 1\nI1005 16:58:44.635470 227 log.go:181] (0xc000d020b0) (0xc000436dc0) Create stream\nI1005 16:58:44.635484 227 log.go:181] (0xc000d020b0) (0xc000436dc0) Stream added, broadcasting: 3\nI1005 16:58:44.636428 227 log.go:181] (0xc000d020b0) Reply frame received for 3\nI1005 16:58:44.636472 227 log.go:181] (0xc000d020b0) (0xc000be0140) Create stream\nI1005 16:58:44.636488 227 log.go:181] (0xc000d020b0) (0xc000be0140) Stream added, broadcasting: 5\nI1005 16:58:44.637659 227 log.go:181] (0xc000d020b0) Reply frame received for 5\nI1005 16:58:44.718041 227 log.go:181] (0xc000d020b0) Data frame received for 5\nI1005 16:58:44.718064 227 log.go:181] (0xc000be0140) (5) Data frame handling\nI1005 16:58:44.718075 227 log.go:181] (0xc000be0140) (5) Data frame sent\n+ nslookup nodeport-service.services-878.svc.cluster.local\nI1005 16:58:44.727808 227 log.go:181] (0xc000d020b0) Data frame received for 3\nI1005 16:58:44.727836 227 log.go:181] (0xc000436dc0) (3) Data frame handling\nI1005 16:58:44.727873 227 log.go:181] (0xc000436dc0) (3) Data frame sent\nI1005 16:58:44.729048 227 log.go:181] (0xc000d020b0) Data frame received for 3\nI1005 
16:58:44.729083 227 log.go:181] (0xc000436dc0) (3) Data frame handling\nI1005 16:58:44.729109 227 log.go:181] (0xc000436dc0) (3) Data frame sent\nI1005 16:58:44.729486 227 log.go:181] (0xc000d020b0) Data frame received for 3\nI1005 16:58:44.729542 227 log.go:181] (0xc000436dc0) (3) Data frame handling\nI1005 16:58:44.729647 227 log.go:181] (0xc000d020b0) Data frame received for 5\nI1005 16:58:44.729671 227 log.go:181] (0xc000be0140) (5) Data frame handling\nI1005 16:58:44.731903 227 log.go:181] (0xc000d020b0) Data frame received for 1\nI1005 16:58:44.731943 227 log.go:181] (0xc000be00a0) (1) Data frame handling\nI1005 16:58:44.731961 227 log.go:181] (0xc000be00a0) (1) Data frame sent\nI1005 16:58:44.731985 227 log.go:181] (0xc000d020b0) (0xc000be00a0) Stream removed, broadcasting: 1\nI1005 16:58:44.732007 227 log.go:181] (0xc000d020b0) Go away received\nI1005 16:58:44.732464 227 log.go:181] (0xc000d020b0) (0xc000be00a0) Stream removed, broadcasting: 1\nI1005 16:58:44.732489 227 log.go:181] (0xc000d020b0) (0xc000436dc0) Stream removed, broadcasting: 3\nI1005 16:58:44.732500 227 log.go:181] (0xc000d020b0) (0xc000be0140) Stream removed, broadcasting: 5\n" Oct 5 16:58:44.737: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-878.svc.cluster.local\tcanonical name = externalsvc.services-878.svc.cluster.local.\nName:\texternalsvc.services-878.svc.cluster.local\nAddress: 10.98.15.224\n\n" STEP: deleting ReplicationController externalsvc in namespace services-878, will wait for the garbage collector to delete the pods Oct 5 16:58:44.798: INFO: Deleting ReplicationController externalsvc took: 7.129607ms Oct 5 16:58:44.898: INFO: Terminating ReplicationController externalsvc pods took: 100.21346ms Oct 5 16:58:59.933: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:58:59.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-878" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.065 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":26,"skipped":489,"failed":0} S ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:58:59.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1470 Oct 5 16:59:04.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1470 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 5 16:59:04.297: INFO: stderr: "I1005 16:59:04.208643 245 log.go:181] (0xc0005bf600) (0xc0005b6aa0) Create stream\nI1005 16:59:04.208725 245 log.go:181] (0xc0005bf600) (0xc0005b6aa0) Stream added, broadcasting: 1\nI1005 16:59:04.211083 245 log.go:181] (0xc0005bf600) Reply frame received for 1\nI1005 16:59:04.211140 245 log.go:181] (0xc0005bf600) (0xc0005b6b40) Create stream\nI1005 16:59:04.211163 245 log.go:181] (0xc0005bf600) (0xc0005b6b40) Stream added, broadcasting: 3\nI1005 16:59:04.211919 245 log.go:181] (0xc0005bf600) Reply frame received for 3\nI1005 16:59:04.211954 245 log.go:181] (0xc0005bf600) (0xc0005b6be0) Create stream\nI1005 16:59:04.211964 245 log.go:181] (0xc0005bf600) (0xc0005b6be0) Stream added, broadcasting: 5\nI1005 16:59:04.212742 245 log.go:181] (0xc0005bf600) Reply frame received for 5\nI1005 16:59:04.285354 245 log.go:181] (0xc0005bf600) Data frame received for 5\nI1005 16:59:04.285381 245 log.go:181] (0xc0005b6be0) (5) Data frame handling\nI1005 16:59:04.285394 245 log.go:181] (0xc0005b6be0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1005 16:59:04.288808 245 log.go:181] (0xc0005bf600) Data frame received for 3\nI1005 16:59:04.288961 245 log.go:181] (0xc0005b6b40) (3) Data frame handling\nI1005 16:59:04.289007 245 log.go:181] (0xc0005b6b40) (3) Data frame sent\nI1005 16:59:04.289276 245 log.go:181] (0xc0005bf600) Data frame received 
for 3\nI1005 16:59:04.289293 245 log.go:181] (0xc0005b6b40) (3) Data frame handling\nI1005 16:59:04.289363 245 log.go:181] (0xc0005bf600) Data frame received for 5\nI1005 16:59:04.289374 245 log.go:181] (0xc0005b6be0) (5) Data frame handling\nI1005 16:59:04.291330 245 log.go:181] (0xc0005bf600) Data frame received for 1\nI1005 16:59:04.291364 245 log.go:181] (0xc0005b6aa0) (1) Data frame handling\nI1005 16:59:04.291390 245 log.go:181] (0xc0005b6aa0) (1) Data frame sent\nI1005 16:59:04.291413 245 log.go:181] (0xc0005bf600) (0xc0005b6aa0) Stream removed, broadcasting: 1\nI1005 16:59:04.291434 245 log.go:181] (0xc0005bf600) Go away received\nI1005 16:59:04.291917 245 log.go:181] (0xc0005bf600) (0xc0005b6aa0) Stream removed, broadcasting: 1\nI1005 16:59:04.291934 245 log.go:181] (0xc0005bf600) (0xc0005b6b40) Stream removed, broadcasting: 3\nI1005 16:59:04.291942 245 log.go:181] (0xc0005bf600) (0xc0005b6be0) Stream removed, broadcasting: 5\n" Oct 5 16:59:04.297: INFO: stdout: "iptables" Oct 5 16:59:04.297: INFO: proxyMode: iptables Oct 5 16:59:04.316: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 16:59:04.337: INFO: Pod kube-proxy-mode-detector still exists Oct 5 16:59:06.337: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 16:59:06.380: INFO: Pod kube-proxy-mode-detector still exists Oct 5 16:59:08.337: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 16:59:08.342: INFO: Pod kube-proxy-mode-detector still exists Oct 5 16:59:10.337: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 16:59:10.362: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1470 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1470 I1005 16:59:10.419008 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1470, replica count: 3 I1005 16:59:13.469341 7 
runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 16:59:16.469616 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 16:59:16.479: INFO: Creating new exec pod Oct 5 16:59:21.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1470 execpod-affinitycgx6k -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Oct 5 16:59:21.733: INFO: stderr: "I1005 16:59:21.631435 262 log.go:181] (0xc000f94dc0) (0xc00094dae0) Create stream\nI1005 16:59:21.631496 262 log.go:181] (0xc000f94dc0) (0xc00094dae0) Stream added, broadcasting: 1\nI1005 16:59:21.636942 262 log.go:181] (0xc000f94dc0) Reply frame received for 1\nI1005 16:59:21.636987 262 log.go:181] (0xc000f94dc0) (0xc00012c280) Create stream\nI1005 16:59:21.637000 262 log.go:181] (0xc000f94dc0) (0xc00012c280) Stream added, broadcasting: 3\nI1005 16:59:21.637869 262 log.go:181] (0xc000f94dc0) Reply frame received for 3\nI1005 16:59:21.637935 262 log.go:181] (0xc000f94dc0) (0xc00012d040) Create stream\nI1005 16:59:21.637954 262 log.go:181] (0xc000f94dc0) (0xc00012d040) Stream added, broadcasting: 5\nI1005 16:59:21.638896 262 log.go:181] (0xc000f94dc0) Reply frame received for 5\nI1005 16:59:21.725861 262 log.go:181] (0xc000f94dc0) Data frame received for 5\nI1005 16:59:21.725908 262 log.go:181] (0xc00012d040) (5) Data frame handling\nI1005 16:59:21.725949 262 log.go:181] (0xc00012d040) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI1005 16:59:21.726266 262 log.go:181] (0xc000f94dc0) Data frame received for 5\nI1005 16:59:21.726299 262 log.go:181] (0xc00012d040) (5) Data frame handling\nI1005 16:59:21.726334 262 log.go:181] (0xc00012d040) (5) Data frame sent\nConnection to 
affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1005 16:59:21.726452 262 log.go:181] (0xc000f94dc0) Data frame received for 3\nI1005 16:59:21.726478 262 log.go:181] (0xc00012c280) (3) Data frame handling\nI1005 16:59:21.726634 262 log.go:181] (0xc000f94dc0) Data frame received for 5\nI1005 16:59:21.726660 262 log.go:181] (0xc00012d040) (5) Data frame handling\nI1005 16:59:21.728361 262 log.go:181] (0xc000f94dc0) Data frame received for 1\nI1005 16:59:21.728399 262 log.go:181] (0xc00094dae0) (1) Data frame handling\nI1005 16:59:21.728434 262 log.go:181] (0xc00094dae0) (1) Data frame sent\nI1005 16:59:21.728466 262 log.go:181] (0xc000f94dc0) (0xc00094dae0) Stream removed, broadcasting: 1\nI1005 16:59:21.728497 262 log.go:181] (0xc000f94dc0) Go away received\nI1005 16:59:21.728792 262 log.go:181] (0xc000f94dc0) (0xc00094dae0) Stream removed, broadcasting: 1\nI1005 16:59:21.728805 262 log.go:181] (0xc000f94dc0) (0xc00012c280) Stream removed, broadcasting: 3\nI1005 16:59:21.728811 262 log.go:181] (0xc000f94dc0) (0xc00012d040) Stream removed, broadcasting: 5\n" Oct 5 16:59:21.733: INFO: stdout: "" Oct 5 16:59:21.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1470 execpod-affinitycgx6k -- /bin/sh -x -c nc -zv -t -w 2 10.103.60.34 80' Oct 5 16:59:21.955: INFO: stderr: "I1005 16:59:21.876372 281 log.go:181] (0xc0000d6000) (0xc0007cc000) Create stream\nI1005 16:59:21.876433 281 log.go:181] (0xc0000d6000) (0xc0007cc000) Stream added, broadcasting: 1\nI1005 16:59:21.878676 281 log.go:181] (0xc0000d6000) Reply frame received for 1\nI1005 16:59:21.878717 281 log.go:181] (0xc0000d6000) (0xc000a885a0) Create stream\nI1005 16:59:21.878730 281 log.go:181] (0xc0000d6000) (0xc000a885a0) Stream added, broadcasting: 3\nI1005 16:59:21.879898 281 log.go:181] (0xc0000d6000) Reply frame received for 3\nI1005 16:59:21.879941 281 log.go:181] (0xc0000d6000) (0xc0007cc0a0) Create stream\nI1005 
16:59:21.879955 281 log.go:181] (0xc0000d6000) (0xc0007cc0a0) Stream added, broadcasting: 5\nI1005 16:59:21.881143 281 log.go:181] (0xc0000d6000) Reply frame received for 5\nI1005 16:59:21.947191 281 log.go:181] (0xc0000d6000) Data frame received for 5\nI1005 16:59:21.947251 281 log.go:181] (0xc0007cc0a0) (5) Data frame handling\nI1005 16:59:21.947287 281 log.go:181] (0xc0007cc0a0) (5) Data frame sent\nI1005 16:59:21.947310 281 log.go:181] (0xc0000d6000) Data frame received for 5\n+ nc -zv -t -w 2 10.103.60.34 80\nConnection to 10.103.60.34 80 port [tcp/http] succeeded!\nI1005 16:59:21.947329 281 log.go:181] (0xc0007cc0a0) (5) Data frame handling\nI1005 16:59:21.947363 281 log.go:181] (0xc0000d6000) Data frame received for 3\nI1005 16:59:21.947398 281 log.go:181] (0xc000a885a0) (3) Data frame handling\nI1005 16:59:21.948669 281 log.go:181] (0xc0000d6000) Data frame received for 1\nI1005 16:59:21.948712 281 log.go:181] (0xc0007cc000) (1) Data frame handling\nI1005 16:59:21.948738 281 log.go:181] (0xc0007cc000) (1) Data frame sent\nI1005 16:59:21.948753 281 log.go:181] (0xc0000d6000) (0xc0007cc000) Stream removed, broadcasting: 1\nI1005 16:59:21.948810 281 log.go:181] (0xc0000d6000) Go away received\nI1005 16:59:21.949371 281 log.go:181] (0xc0000d6000) (0xc0007cc000) Stream removed, broadcasting: 1\nI1005 16:59:21.949394 281 log.go:181] (0xc0000d6000) (0xc000a885a0) Stream removed, broadcasting: 3\nI1005 16:59:21.949406 281 log.go:181] (0xc0000d6000) (0xc0007cc0a0) Stream removed, broadcasting: 5\n" Oct 5 16:59:21.955: INFO: stdout: "" Oct 5 16:59:21.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1470 execpod-affinitycgx6k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.60.34:80/ ; done' Oct 5 16:59:22.266: INFO: stderr: "I1005 16:59:22.090463 300 log.go:181] (0xc000a29290) (0xc00081e5a0) Create stream\nI1005 16:59:22.090527 300 
log.go:181] (0xc000a29290) (0xc00081e5a0) Stream added, broadcasting: 1\nI1005 16:59:22.098989 300 log.go:181] (0xc000a29290) Reply frame received for 1\nI1005 16:59:22.099030 300 log.go:181] (0xc000a29290) (0xc000312140) Create stream\nI1005 16:59:22.099039 300 log.go:181] (0xc000a29290) (0xc000312140) Stream added, broadcasting: 3\nI1005 16:59:22.100031 300 log.go:181] (0xc000a29290) Reply frame received for 3\nI1005 16:59:22.100067 300 log.go:181] (0xc000a29290) (0xc0001aa280) Create stream\nI1005 16:59:22.100080 300 log.go:181] (0xc000a29290) (0xc0001aa280) Stream added, broadcasting: 5\nI1005 16:59:22.101039 300 log.go:181] (0xc000a29290) Reply frame received for 5\nI1005 16:59:22.153480 300 log.go:181] (0xc000a29290) Data frame received for 3\nI1005 16:59:22.153519 300 log.go:181] (0xc000312140) (3) Data frame handling\nI1005 16:59:22.153533 300 log.go:181] (0xc000312140) (3) Data frame sent\nI1005 16:59:22.153547 300 log.go:181] (0xc000a29290) Data frame received for 5\nI1005 16:59:22.153558 300 log.go:181] (0xc0001aa280) (5) Data frame handling\nI1005 16:59:22.153575 300 log.go:181] (0xc0001aa280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.60.34:80/\nI1005 16:59:22.160027 300 log.go:181] (0xc000a29290) Data frame received for 3\nI1005 16:59:22.160053 300 log.go:181] (0xc000312140) (3) Data frame handling\nI1005 16:59:22.160072 300 log.go:181] (0xc000312140) (3) Data frame sent\nI1005 16:59:22.160575 300 log.go:181] (0xc000a29290) Data frame received for 3\nI1005 16:59:22.160615 300 log.go:181] (0xc000312140) (3) Data frame handling\nI1005 16:59:22.160634 300 log.go:181] (0xc000312140) (3) Data frame sent\nI1005 16:59:22.160654 300 log.go:181] (0xc000a29290) Data frame received for 5\nI1005 16:59:22.160669 300 log.go:181] (0xc0001aa280) (5) Data frame handling\nI1005 16:59:22.160693 300 log.go:181] (0xc0001aa280) (5) Data frame sent\nI1005 16:59:22.160709 300 log.go:181] (0xc000a29290) Data frame received for 
[streaming exec stderr elided: repeated "Data frame received / Data frame handling / Data frame sent" frames interleaved with the probe loop "+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.60.34:80/", 16 probes in total]\n" Oct 5 16:59:22.267: INFO: stdout: "\naffinity-clusterip-timeout-dx642\n(same hostname repeated for all 16 probes)" Oct 5 16:59:22.267: INFO: Received response from host: affinity-clusterip-timeout-dx642 (logged 16 times, once per probe) Oct 5 16:59:22.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1470 execpod-affinitycgx6k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.103.60.34:80/' Oct 5 16:59:22.508: INFO: stderr: "[streaming exec frames elided]" Oct 5 16:59:22.508: INFO: stdout: "affinity-clusterip-timeout-dx642" Oct 5 16:59:37.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1470 execpod-affinitycgx6k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.103.60.34:80/' Oct 5 16:59:37.747: INFO: stderr: "[streaming exec frames elided]" Oct 5 16:59:37.747: INFO: stdout: "affinity-clusterip-timeout-bch86" Oct 5 16:59:37.747: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1470, will wait for the garbage collector to delete the
pods Oct 5 16:59:38.104: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 184.309028ms Oct 5 16:59:38.504: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.182525ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 16:59:49.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1470" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:50.028 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":27,"skipped":490,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 16:59:49.990: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8982 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8982 STEP: Creating statefulset with conflicting port in namespace statefulset-8982 STEP: Waiting until pod test-pod will start running in namespace statefulset-8982 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8982 Oct 5 16:59:56.166: INFO: Observed stateful pod in namespace: statefulset-8982, name: ss-0, uid: 12bb3d5f-884a-40c1-9a7b-5b69ae3fb533, status phase: Pending. Waiting for statefulset controller to delete. Oct 5 16:59:56.195: INFO: Observed stateful pod in namespace: statefulset-8982, name: ss-0, uid: 12bb3d5f-884a-40c1-9a7b-5b69ae3fb533, status phase: Failed. Waiting for statefulset controller to delete. Oct 5 16:59:56.216: INFO: Observed stateful pod in namespace: statefulset-8982, name: ss-0, uid: 12bb3d5f-884a-40c1-9a7b-5b69ae3fb533, status phase: Failed. Waiting for statefulset controller to delete. 
Oct 5 16:59:56.221: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8982 STEP: Removing pod with conflicting port in namespace statefulset-8982 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8982 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 17:00:02.303: INFO: Deleting all statefulset in ns statefulset-8982 Oct 5 17:00:02.306: INFO: Scaling statefulset ss to 0 Oct 5 17:00:12.342: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 17:00:12.345: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:12.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8982" for this suite. 
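The StatefulSet test above waits for ss-0 to move through the observed phases (Pending, Failed after the port conflict, then Running once the conflicting pod is removed). A minimal, cluster-free sketch of that wait loop follows; the `get_phase` stub stands in for a by-hand query such as `kubectl get pod ss-0 -o jsonpath='{.status.phase}'` (an assumption about how one would poll it manually, not the framework's actual code), and the hard-coded phase sequence is illustrative:

```shell
#!/bin/sh
# Poll-until-Running loop, mirroring the e2e wait on stateful pod ss-0.
i=0
get_phase() {
  # Stub for: kubectl get pod ss-0 -n <namespace> -o jsonpath='{.status.phase}'
  # The sequence below mirrors the phases observed in the log above.
  case $i in
    0|1) echo Pending ;;
    2)   echo Failed ;;
    *)   echo Running ;;
  esac
}
while :; do
  phase=$(get_phase)
  [ "$phase" = Running ] && break
  i=$((i + 1))   # in the real loop this would be a sleep-and-retry
done
echo "ss-0 reached phase: $phase after $i polls"
```

The real framework additionally checks that the pod's UID changed, i.e. that ss-0 was recreated rather than merely restarted.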
• [SLOW TEST:22.374 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":28,"skipped":491,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:12.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-2b05bf3d-59a6-45ba-a758-a2d509bd1787 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:18.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2289" for this suite. • [SLOW TEST:6.138 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":498,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:18.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod 
Oct 5 17:00:18.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6086' Oct 5 17:00:18.846: INFO: stderr: "" Oct 5 17:00:18.846: INFO: stdout: "pod/pause created\n" Oct 5 17:00:18.846: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 5 17:00:18.846: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6086" to be "running and ready" Oct 5 17:00:18.875: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 28.759747ms Oct 5 17:00:20.881: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034280744s Oct 5 17:00:22.885: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.038859719s Oct 5 17:00:22.885: INFO: Pod "pause" satisfied condition "running and ready" Oct 5 17:00:22.885: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Oct 5 17:00:22.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6086' Oct 5 17:00:23.002: INFO: stderr: "" Oct 5 17:00:23.002: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 5 17:00:23.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6086' Oct 5 17:00:23.099: INFO: stderr: "" Oct 5 17:00:23.099: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label 
testing-label of a pod Oct 5 17:00:23.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6086' Oct 5 17:00:23.208: INFO: stderr: "" Oct 5 17:00:23.208: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 5 17:00:23.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6086' Oct 5 17:00:23.343: INFO: stderr: "" Oct 5 17:00:23.343: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Oct 5 17:00:23.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6086' Oct 5 17:00:23.472: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 5 17:00:23.472: INFO: stdout: "pod \"pause\" force deleted\n" Oct 5 17:00:23.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6086' Oct 5 17:00:23.581: INFO: stderr: "No resources found in kubectl-6086 namespace.\n" Oct 5 17:00:23.581: INFO: stdout: "" Oct 5 17:00:23.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6086 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 17:00:23.825: INFO: stderr: "" Oct 5 17:00:23.825: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:23.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6086" for this suite. 
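The label round-trip above reduces to two kubectl operations: `kubectl label pods pause testing-label=testing-label-value` to add the label, and `kubectl label pods pause testing-label-` (note the trailing dash) to remove it. A cluster-free sketch of that lifecycle, with a local variable standing in for the pod's label map (the mapping itself is illustrative):

```shell
#!/bin/sh
# Label add/remove lifecycle exercised by the test above; a shell variable
# simulates the pod's labels so this runs without a cluster.
labels=""                                   # pod starts without the test label

add_label() {                               # kubectl label pods pause testing-label=testing-label-value
  labels="testing-label=testing-label-value"
}
remove_label() {                            # kubectl label pods pause testing-label-
  labels=""
}

add_label
echo "after add:    ${labels:-<none>}"
remove_label
echo "after remove: ${labels:-<none>}"
```

As in the log, verification after removal shows an empty TESTING-LABEL column (`kubectl get pod pause -L testing-label`).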
• [SLOW TEST:5.347 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":30,"skipped":510,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:23.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:00:25.779: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Oct 5 17:00:27.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514025, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514025, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514026, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514025, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:00:29.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514025, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514025, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514026, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514025, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint Oct 5 17:00:32.841: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:32.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9355" for this suite. STEP: Destroying namespace "webhook-9355-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.151 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":31,"skipped":517,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:33.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-d9d6844a-0828-46b7-b914-c44146152a8d [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:33.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5867" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":32,"skipped":530,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:33.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Oct 5 17:00:33.166: INFO: Major version: 1 STEP: Confirm minor version Oct 5 17:00:33.166: INFO: cleanMinorVersion: 19 Oct 5 17:00:33.166: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:33.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-2166" for this suite. 
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":33,"skipped":544,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:33.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:00:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5477" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":34,"skipped":560,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:00:33.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1005 17:01:13.908726 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 17:02:15.931: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Oct 5 17:02:15.931: INFO: Deleting pod "simpletest.rc-4lp8h" in namespace "gc-4674" Oct 5 17:02:15.979: INFO: Deleting pod "simpletest.rc-b765f" in namespace "gc-4674" Oct 5 17:02:16.066: INFO: Deleting pod "simpletest.rc-f67qd" in namespace "gc-4674" Oct 5 17:02:16.400: INFO: Deleting pod "simpletest.rc-fnznp" in namespace "gc-4674" Oct 5 17:02:16.653: INFO: Deleting pod "simpletest.rc-hqdzk" in namespace "gc-4674" Oct 5 17:02:16.850: INFO: Deleting pod "simpletest.rc-jndl2" in namespace "gc-4674" Oct 5 17:02:17.335: INFO: Deleting pod "simpletest.rc-mg27k" in namespace "gc-4674" Oct 5 17:02:17.714: INFO: Deleting pod "simpletest.rc-rpzhj" in namespace "gc-4674" Oct 5 17:02:17.839: INFO: Deleting pod "simpletest.rc-x874c" in namespace "gc-4674" Oct 5 17:02:18.264: INFO: Deleting pod "simpletest.rc-xh86g" in namespace "gc-4674" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:02:18.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4674" for this suite. 
• [SLOW TEST:104.895 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":35,"skipped":573,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:02:18.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:02:19.566: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:02:21.582: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514140, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514140, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514140, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514139, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:02:24.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:02:25.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1388" for this suite. 
STEP: Destroying namespace "webhook-1388-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.899 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":36,"skipped":580,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:02:25.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 5 
17:02:25.368: INFO: Waiting up to 5m0s for pod "pod-7415efcb-7138-404a-afa2-dd9aafa73214" in namespace "emptydir-1628" to be "Succeeded or Failed" Oct 5 17:02:25.382: INFO: Pod "pod-7415efcb-7138-404a-afa2-dd9aafa73214": Phase="Pending", Reason="", readiness=false. Elapsed: 14.09756ms Oct 5 17:02:27.611: INFO: Pod "pod-7415efcb-7138-404a-afa2-dd9aafa73214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242848484s Oct 5 17:02:29.616: INFO: Pod "pod-7415efcb-7138-404a-afa2-dd9aafa73214": Phase="Running", Reason="", readiness=true. Elapsed: 4.247969098s Oct 5 17:02:31.619: INFO: Pod "pod-7415efcb-7138-404a-afa2-dd9aafa73214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.251444712s STEP: Saw pod success Oct 5 17:02:31.619: INFO: Pod "pod-7415efcb-7138-404a-afa2-dd9aafa73214" satisfied condition "Succeeded or Failed" Oct 5 17:02:31.622: INFO: Trying to get logs from node latest-worker pod pod-7415efcb-7138-404a-afa2-dd9aafa73214 container test-container: STEP: delete the pod Oct 5 17:02:31.675: INFO: Waiting for pod pod-7415efcb-7138-404a-afa2-dd9aafa73214 to disappear Oct 5 17:02:31.695: INFO: Pod pod-7415efcb-7138-404a-afa2-dd9aafa73214 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:02:31.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1628" for this suite. 
• [SLOW TEST:6.449 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":37,"skipped":583,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:02:31.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 
17:02:35.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6843" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":593,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:02:35.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 5 17:02:40.463: INFO: Successfully updated pod "annotationupdateb5d21ff2-d8b3-4894-9474-a2a83da68a72" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:02:42.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1810" for this suite. 
• [SLOW TEST:6.674 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:02:42.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:02:43.697: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:02:45.707: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:02:47.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514163, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 
17:02:50.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:02:50.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-983" for this suite. STEP: Destroying namespace "webhook-983-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.637 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":40,"skipped":625,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:02:51.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 5 17:03:01.228: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 17:03:01.346: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 17:03:03.346: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 17:03:03.350: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 17:03:05.356: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 17:03:05.362: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 17:03:07.346: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 17:03:07.351: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 17:03:09.346: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 17:03:09.350: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 17:03:11.346: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 17:03:13.483: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:03:13.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7180" for this suite. 
• [SLOW TEST:22.992 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":633,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:03:14.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:03:14.236: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 5 17:03:16.386: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:03:17.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9495" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":42,"skipped":652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:03:17.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API 
[Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-9064b8bb-7fa9-4a93-8a32-ff16ea105fa9 STEP: Creating secret with name secret-projected-all-test-volume-34dfd250-f557-41f5-bca5-3970baa38e94 STEP: Creating a pod to test Check all projections for projected volume plugin Oct 5 17:03:18.035: INFO: Waiting up to 5m0s for pod "projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb" in namespace "projected-2622" to be "Succeeded or Failed" Oct 5 17:03:18.220: INFO: Pod "projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb": Phase="Pending", Reason="", readiness=false. Elapsed: 185.129532ms Oct 5 17:03:20.310: INFO: Pod "projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2748117s Oct 5 17:03:22.321: INFO: Pod "projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285552944s Oct 5 17:03:24.325: INFO: Pod "projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.289837941s STEP: Saw pod success Oct 5 17:03:24.325: INFO: Pod "projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb" satisfied condition "Succeeded or Failed" Oct 5 17:03:24.327: INFO: Trying to get logs from node latest-worker pod projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb container projected-all-volume-test: STEP: delete the pod Oct 5 17:03:24.355: INFO: Waiting for pod projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb to disappear Oct 5 17:03:24.361: INFO: Pod projected-volume-e13a8576-2fc7-43f1-a27d-77721596efdb no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:03:24.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2622" for this suite. • [SLOW TEST:6.835 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":43,"skipped":676,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected 
downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:03:24.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:03:24.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2" in namespace "projected-2194" to be "Succeeded or Failed" Oct 5 17:03:24.476: INFO: Pod "downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.837915ms Oct 5 17:03:26.480: INFO: Pod "downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012858236s Oct 5 17:03:28.483: INFO: Pod "downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016731875s STEP: Saw pod success Oct 5 17:03:28.483: INFO: Pod "downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2" satisfied condition "Succeeded or Failed" Oct 5 17:03:28.486: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2 container client-container: STEP: delete the pod Oct 5 17:03:28.531: INFO: Waiting for pod downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2 to disappear Oct 5 17:03:28.543: INFO: Pod downwardapi-volume-04d9b673-aceb-481d-a5a9-cc8f2c8d00f2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:03:28.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2194" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":678,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:03:28.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2529 STEP: creating service affinity-nodeport in namespace services-2529 STEP: creating replication controller affinity-nodeport in namespace services-2529 I1005 17:03:29.035905 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2529, replica count: 3 I1005 17:03:32.086325 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:03:35.086665 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:03:35.098: INFO: Creating new exec pod Oct 5 17:03:40.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2529 execpod-affinitygwr8g -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Oct 5 17:03:40.367: INFO: stderr: "I1005 17:03:40.277920 501 log.go:181] (0xc00003a420) (0xc000cd88c0) Create stream\nI1005 17:03:40.277986 501 log.go:181] (0xc00003a420) (0xc000cd88c0) Stream added, broadcasting: 1\nI1005 17:03:40.280406 501 log.go:181] (0xc00003a420) Reply frame received for 1\nI1005 17:03:40.280450 501 log.go:181] (0xc00003a420) (0xc000b98000) Create stream\nI1005 17:03:40.280462 501 log.go:181] (0xc00003a420) (0xc000b98000) Stream added, broadcasting: 3\nI1005 17:03:40.281614 501 log.go:181] (0xc00003a420) Reply frame received for 3\nI1005 17:03:40.281661 501 log.go:181] (0xc00003a420) (0xc000b98140) Create 
stream\nI1005 17:03:40.281676 501 log.go:181] (0xc00003a420) (0xc000b98140) Stream added, broadcasting: 5\nI1005 17:03:40.282631 501 log.go:181] (0xc00003a420) Reply frame received for 5\nI1005 17:03:40.357750 501 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:03:40.357784 501 log.go:181] (0xc000b98140) (5) Data frame handling\nI1005 17:03:40.357805 501 log.go:181] (0xc000b98140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI1005 17:03:40.358725 501 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:03:40.358762 501 log.go:181] (0xc000b98140) (5) Data frame handling\nI1005 17:03:40.358799 501 log.go:181] (0xc000b98140) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1005 17:03:40.359087 501 log.go:181] (0xc00003a420) Data frame received for 3\nI1005 17:03:40.359128 501 log.go:181] (0xc000b98000) (3) Data frame handling\nI1005 17:03:40.359167 501 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:03:40.359263 501 log.go:181] (0xc000b98140) (5) Data frame handling\nI1005 17:03:40.360801 501 log.go:181] (0xc00003a420) Data frame received for 1\nI1005 17:03:40.360919 501 log.go:181] (0xc000cd88c0) (1) Data frame handling\nI1005 17:03:40.360949 501 log.go:181] (0xc000cd88c0) (1) Data frame sent\nI1005 17:03:40.360964 501 log.go:181] (0xc00003a420) (0xc000cd88c0) Stream removed, broadcasting: 1\nI1005 17:03:40.361213 501 log.go:181] (0xc00003a420) Go away received\nI1005 17:03:40.361448 501 log.go:181] (0xc00003a420) (0xc000cd88c0) Stream removed, broadcasting: 1\nI1005 17:03:40.361478 501 log.go:181] (0xc00003a420) (0xc000b98000) Stream removed, broadcasting: 3\nI1005 17:03:40.361498 501 log.go:181] (0xc00003a420) (0xc000b98140) Stream removed, broadcasting: 5\n" Oct 5 17:03:40.367: INFO: stdout: "" Oct 5 17:03:40.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2529 execpod-affinitygwr8g -- 
/bin/sh -x -c nc -zv -t -w 2 10.105.231.91 80' Oct 5 17:03:40.597: INFO: stderr: "I1005 17:03:40.506645 519 log.go:181] (0xc000d3edc0) (0xc000f2a5a0) Create stream\nI1005 17:03:40.506698 519 log.go:181] (0xc000d3edc0) (0xc000f2a5a0) Stream added, broadcasting: 1\nI1005 17:03:40.511622 519 log.go:181] (0xc000d3edc0) Reply frame received for 1\nI1005 17:03:40.511659 519 log.go:181] (0xc000d3edc0) (0xc000f2a000) Create stream\nI1005 17:03:40.511668 519 log.go:181] (0xc000d3edc0) (0xc000f2a000) Stream added, broadcasting: 3\nI1005 17:03:40.512655 519 log.go:181] (0xc000d3edc0) Reply frame received for 3\nI1005 17:03:40.512704 519 log.go:181] (0xc000d3edc0) (0xc00091cd20) Create stream\nI1005 17:03:40.512721 519 log.go:181] (0xc000d3edc0) (0xc00091cd20) Stream added, broadcasting: 5\nI1005 17:03:40.513944 519 log.go:181] (0xc000d3edc0) Reply frame received for 5\nI1005 17:03:40.588159 519 log.go:181] (0xc000d3edc0) Data frame received for 3\nI1005 17:03:40.588201 519 log.go:181] (0xc000f2a000) (3) Data frame handling\nI1005 17:03:40.588228 519 log.go:181] (0xc000d3edc0) Data frame received for 5\nI1005 17:03:40.588240 519 log.go:181] (0xc00091cd20) (5) Data frame handling\nI1005 17:03:40.588252 519 log.go:181] (0xc00091cd20) (5) Data frame sent\nI1005 17:03:40.588263 519 log.go:181] (0xc000d3edc0) Data frame received for 5\nI1005 17:03:40.588277 519 log.go:181] (0xc00091cd20) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.231.91 80\nConnection to 10.105.231.91 80 port [tcp/http] succeeded!\nI1005 17:03:40.589926 519 log.go:181] (0xc000d3edc0) Data frame received for 1\nI1005 17:03:40.589948 519 log.go:181] (0xc000f2a5a0) (1) Data frame handling\nI1005 17:03:40.589967 519 log.go:181] (0xc000f2a5a0) (1) Data frame sent\nI1005 17:03:40.590111 519 log.go:181] (0xc000d3edc0) (0xc000f2a5a0) Stream removed, broadcasting: 1\nI1005 17:03:40.590130 519 log.go:181] (0xc000d3edc0) Go away received\nI1005 17:03:40.590604 519 log.go:181] (0xc000d3edc0) (0xc000f2a5a0) Stream 
removed, broadcasting: 1\nI1005 17:03:40.590629 519 log.go:181] (0xc000d3edc0) (0xc000f2a000) Stream removed, broadcasting: 3\nI1005 17:03:40.590641 519 log.go:181] (0xc000d3edc0) (0xc00091cd20) Stream removed, broadcasting: 5\n" Oct 5 17:03:40.597: INFO: stdout: "" Oct 5 17:03:40.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2529 execpod-affinitygwr8g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32072' Oct 5 17:03:40.807: INFO: stderr: "I1005 17:03:40.728176 537 log.go:181] (0xc00055d1e0) (0xc000554960) Create stream\nI1005 17:03:40.728223 537 log.go:181] (0xc00055d1e0) (0xc000554960) Stream added, broadcasting: 1\nI1005 17:03:40.730501 537 log.go:181] (0xc00055d1e0) Reply frame received for 1\nI1005 17:03:40.730547 537 log.go:181] (0xc00055d1e0) (0xc00013c280) Create stream\nI1005 17:03:40.730569 537 log.go:181] (0xc00055d1e0) (0xc00013c280) Stream added, broadcasting: 3\nI1005 17:03:40.731415 537 log.go:181] (0xc00055d1e0) Reply frame received for 3\nI1005 17:03:40.731460 537 log.go:181] (0xc00055d1e0) (0xc000c2c000) Create stream\nI1005 17:03:40.731478 537 log.go:181] (0xc00055d1e0) (0xc000c2c000) Stream added, broadcasting: 5\nI1005 17:03:40.732188 537 log.go:181] (0xc00055d1e0) Reply frame received for 5\nI1005 17:03:40.799502 537 log.go:181] (0xc00055d1e0) Data frame received for 3\nI1005 17:03:40.799554 537 log.go:181] (0xc00013c280) (3) Data frame handling\nI1005 17:03:40.799590 537 log.go:181] (0xc00055d1e0) Data frame received for 5\nI1005 17:03:40.799606 537 log.go:181] (0xc000c2c000) (5) Data frame handling\nI1005 17:03:40.799621 537 log.go:181] (0xc000c2c000) (5) Data frame sent\nI1005 17:03:40.799635 537 log.go:181] (0xc00055d1e0) Data frame received for 5\nI1005 17:03:40.799647 537 log.go:181] (0xc000c2c000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 32072\nConnection to 172.18.0.15 32072 port [tcp/32072] succeeded!\nI1005 17:03:40.801204 537 
log.go:181] (0xc00055d1e0) Data frame received for 1\nI1005 17:03:40.801231 537 log.go:181] (0xc000554960) (1) Data frame handling\nI1005 17:03:40.801239 537 log.go:181] (0xc000554960) (1) Data frame sent\nI1005 17:03:40.801261 537 log.go:181] (0xc00055d1e0) (0xc000554960) Stream removed, broadcasting: 1\nI1005 17:03:40.801326 537 log.go:181] (0xc00055d1e0) Go away received\nI1005 17:03:40.801589 537 log.go:181] (0xc00055d1e0) (0xc000554960) Stream removed, broadcasting: 1\nI1005 17:03:40.801607 537 log.go:181] (0xc00055d1e0) (0xc00013c280) Stream removed, broadcasting: 3\nI1005 17:03:40.801617 537 log.go:181] (0xc00055d1e0) (0xc000c2c000) Stream removed, broadcasting: 5\n" Oct 5 17:03:40.807: INFO: stdout: "" Oct 5 17:03:40.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2529 execpod-affinitygwr8g -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 32072' Oct 5 17:03:41.034: INFO: stderr: "I1005 17:03:40.943214 555 log.go:181] (0xc00003a420) (0xc00087e000) Create stream\nI1005 17:03:40.943274 555 log.go:181] (0xc00003a420) (0xc00087e000) Stream added, broadcasting: 1\nI1005 17:03:40.946814 555 log.go:181] (0xc00003a420) Reply frame received for 1\nI1005 17:03:40.946868 555 log.go:181] (0xc00003a420) (0xc000ee6000) Create stream\nI1005 17:03:40.946881 555 log.go:181] (0xc00003a420) (0xc000ee6000) Stream added, broadcasting: 3\nI1005 17:03:40.949930 555 log.go:181] (0xc00003a420) Reply frame received for 3\nI1005 17:03:40.949950 555 log.go:181] (0xc00003a420) (0xc000ee6140) Create stream\nI1005 17:03:40.949957 555 log.go:181] (0xc00003a420) (0xc000ee6140) Stream added, broadcasting: 5\nI1005 17:03:40.950969 555 log.go:181] (0xc00003a420) Reply frame received for 5\nI1005 17:03:41.024710 555 log.go:181] (0xc00003a420) Data frame received for 3\nI1005 17:03:41.024755 555 log.go:181] (0xc000ee6000) (3) Data frame handling\nI1005 17:03:41.024950 555 log.go:181] (0xc00003a420) Data frame 
received for 5\nI1005 17:03:41.024968 555 log.go:181] (0xc000ee6140) (5) Data frame handling\nI1005 17:03:41.024992 555 log.go:181] (0xc000ee6140) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.16 32072\nConnection to 172.18.0.16 32072 port [tcp/32072] succeeded!\nI1005 17:03:41.025113 555 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:03:41.025136 555 log.go:181] (0xc000ee6140) (5) Data frame handling\nI1005 17:03:41.027121 555 log.go:181] (0xc00003a420) Data frame received for 1\nI1005 17:03:41.027147 555 log.go:181] (0xc00087e000) (1) Data frame handling\nI1005 17:03:41.027164 555 log.go:181] (0xc00087e000) (1) Data frame sent\nI1005 17:03:41.027172 555 log.go:181] (0xc00003a420) (0xc00087e000) Stream removed, broadcasting: 1\nI1005 17:03:41.027181 555 log.go:181] (0xc00003a420) Go away received\nI1005 17:03:41.027579 555 log.go:181] (0xc00003a420) (0xc00087e000) Stream removed, broadcasting: 1\nI1005 17:03:41.027605 555 log.go:181] (0xc00003a420) (0xc000ee6000) Stream removed, broadcasting: 3\nI1005 17:03:41.027620 555 log.go:181] (0xc00003a420) (0xc000ee6140) Stream removed, broadcasting: 5\n" Oct 5 17:03:41.034: INFO: stdout: "" Oct 5 17:03:41.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2529 execpod-affinitygwr8g -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:32072/ ; done' Oct 5 17:03:41.351: INFO: stderr: "I1005 17:03:41.171038 573 log.go:181] (0xc000212000) (0xc000f0e000) Create stream\nI1005 17:03:41.171099 573 log.go:181] (0xc000212000) (0xc000f0e000) Stream added, broadcasting: 1\nI1005 17:03:41.173253 573 log.go:181] (0xc000212000) Reply frame received for 1\nI1005 17:03:41.173299 573 log.go:181] (0xc000212000) (0xc000f0e0a0) Create stream\nI1005 17:03:41.173313 573 log.go:181] (0xc000212000) (0xc000f0e0a0) Stream added, broadcasting: 3\nI1005 17:03:41.174319 573 log.go:181] (0xc000212000) Reply 
frame received for 3\nI1005 17:03:41.174357 573 log.go:181] (0xc000212000) (0xc000f0e140) Create stream\nI1005 17:03:41.174370 573 log.go:181] (0xc000212000) (0xc000f0e140) Stream added, broadcasting: 5\nI1005 17:03:41.175428 573 log.go:181] (0xc000212000) Reply frame received for 5\nI1005 17:03:41.244724 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.244766 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.244782 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.244804 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.244823 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.244949 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.252461 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.252493 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.252518 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.253131 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.253148 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.253155 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.253166 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.253172 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.253179 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.258898 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.258922 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.258945 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.259284 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.259309 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 
17:03:41.259319 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.259333 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.259341 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.259349 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.265596 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.265616 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.265632 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.266150 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.266175 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.266191 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.266213 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.266227 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.266243 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.271006 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.271025 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.271053 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.271527 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.271553 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.271564 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.271588 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.271616 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.271642 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.276143 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.276168 
573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.276187 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.276582 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.276597 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.276605 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.276617 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.276630 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.276639 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.281476 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.281491 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.281500 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.282121 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.282148 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.282169 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.282180 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.282189 573 log.go:181] (0xc000f0e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.282217 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.282312 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.282336 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.282359 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.286296 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.286319 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.286330 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.286857 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.286893 573 
log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.286906 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.286919 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.286929 573 log.go:181] (0xc000f0e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.286953 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.286983 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.287008 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.287046 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.294256 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.294287 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.294314 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.295126 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.295205 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.295219 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.295248 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.295279 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.295300 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.302295 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.302323 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.302348 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.303131 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.303168 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.303195 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.303224 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.303245 573 
log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.303266 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.307734 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.307767 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.307792 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.308357 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.308379 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.308390 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.308417 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.308445 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.308467 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.312065 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.312094 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.312115 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.312136 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.312150 573 log.go:181] (0xc000f0e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.312173 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.312210 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.312231 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.312244 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.312255 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.312279 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.312315 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.317848 573 log.go:181] 
(0xc000212000) Data frame received for 3\nI1005 17:03:41.317879 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.317908 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.318382 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.318398 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.318404 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.318429 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.318446 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.318458 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.325599 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.325616 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.325629 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.326378 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.326394 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.326403 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.326419 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.326450 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.326476 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.326491 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.326502 573 log.go:181] (0xc000f0e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.326526 573 log.go:181] (0xc000f0e140) (5) Data frame sent\nI1005 17:03:41.331224 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.331261 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.331296 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.331816 573 log.go:181] 
(0xc000212000) Data frame received for 5\nI1005 17:03:41.331849 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.331876 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.331900 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.331922 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.331956 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.336500 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.336531 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.336554 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.337164 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.337188 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.337208 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.337232 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.337251 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.337286 573 log.go:181] (0xc000f0e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:32072/\nI1005 17:03:41.343847 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.343869 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.343888 573 log.go:181] (0xc000f0e0a0) (3) Data frame sent\nI1005 17:03:41.344788 573 log.go:181] (0xc000212000) Data frame received for 5\nI1005 17:03:41.344815 573 log.go:181] (0xc000f0e140) (5) Data frame handling\nI1005 17:03:41.345000 573 log.go:181] (0xc000212000) Data frame received for 3\nI1005 17:03:41.345027 573 log.go:181] (0xc000f0e0a0) (3) Data frame handling\nI1005 17:03:41.346598 573 log.go:181] (0xc000212000) Data frame received for 1\nI1005 17:03:41.346623 573 log.go:181] (0xc000f0e000) (1) Data frame handling\nI1005 17:03:41.346639 573 
log.go:181] (0xc000f0e000) (1) Data frame sent\nI1005 17:03:41.346652 573 log.go:181] (0xc000212000) (0xc000f0e000) Stream removed, broadcasting: 1\nI1005 17:03:41.346685 573 log.go:181] (0xc000212000) Go away received\nI1005 17:03:41.346978 573 log.go:181] (0xc000212000) (0xc000f0e000) Stream removed, broadcasting: 1\nI1005 17:03:41.346998 573 log.go:181] (0xc000212000) (0xc000f0e0a0) Stream removed, broadcasting: 3\nI1005 17:03:41.347006 573 log.go:181] (0xc000212000) (0xc000f0e140) Stream removed, broadcasting: 5\n"
Oct 5 17:03:41.352: INFO: stdout: "\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs\naffinity-nodeport-gczbs"
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Received response from host: affinity-nodeport-gczbs
Oct 5 17:03:41.352: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-2529, will wait for the garbage collector to delete the pods
Oct 5 17:03:41.449: INFO: Deleting ReplicationController affinity-nodeport took: 5.184062ms
Oct 5 17:03:41.949: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.224588ms
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:03:50.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2529" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:21.528 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":45,"skipped":705,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:03:50.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-z22q
STEP: Creating a pod to test atomic-volume-subpath
Oct 5 17:03:50.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z22q" in namespace "subpath-3556" to be "Succeeded or Failed"
Oct 5 17:03:50.221: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.673615ms
Oct 5 17:03:52.226: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008169516s
Oct 5 17:03:54.230: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 4.012828357s
Oct 5 17:03:56.234: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 6.016156401s
Oct 5 17:03:58.238: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 8.020484785s
Oct 5 17:04:00.243: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 10.02516883s
Oct 5 17:04:02.247: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 12.029049886s
Oct 5 17:04:04.251: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 14.032862664s
Oct 5 17:04:06.254: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 16.036365777s
Oct 5 17:04:08.267: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 18.049382199s
Oct 5 17:04:10.271: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 20.053794208s
Oct 5 17:04:12.276: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 22.058374543s
Oct 5 17:04:14.603: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Running", Reason="", readiness=true. Elapsed: 24.385821751s
Oct 5 17:04:16.609: INFO: Pod "pod-subpath-test-downwardapi-z22q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.39114146s
STEP: Saw pod success
Oct 5 17:04:16.609: INFO: Pod "pod-subpath-test-downwardapi-z22q" satisfied condition "Succeeded or Failed"
Oct 5 17:04:16.612: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-z22q container test-container-subpath-downwardapi-z22q:
STEP: delete the pod
Oct 5 17:04:16.678: INFO: Waiting for pod pod-subpath-test-downwardapi-z22q to disappear
Oct 5 17:04:16.682: INFO: Pod pod-subpath-test-downwardapi-z22q no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-z22q
Oct 5 17:04:16.682: INFO: Deleting pod "pod-subpath-test-downwardapi-z22q" in namespace "subpath-3556"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:04:16.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3556" for this suite.
• [SLOW TEST:26.612 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":46,"skipped":712,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:04:16.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:04:17.378: INFO: Checking APIGroup: apiregistration.k8s.io
Oct 5 17:04:17.379: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Oct 5 17:04:17.379: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.379: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Oct 5 17:04:17.379: INFO: Checking APIGroup: extensions
Oct 5 17:04:17.380: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Oct 5 17:04:17.380: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Oct 5 17:04:17.380: INFO: extensions/v1beta1 matches extensions/v1beta1
Oct 5 17:04:17.380: INFO: Checking APIGroup: apps
Oct 5 17:04:17.382: INFO: PreferredVersion.GroupVersion: apps/v1
Oct 5 17:04:17.382: INFO: Versions found [{apps/v1 v1}]
Oct 5 17:04:17.382: INFO: apps/v1 matches apps/v1
Oct 5 17:04:17.382: INFO: Checking APIGroup: events.k8s.io
Oct 5 17:04:17.383: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Oct 5 17:04:17.383: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.383: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Oct 5 17:04:17.383: INFO: Checking APIGroup: authentication.k8s.io
Oct 5 17:04:17.384: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Oct 5 17:04:17.384: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.384: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Oct 5 17:04:17.384: INFO: Checking APIGroup: authorization.k8s.io
Oct 5 17:04:17.385: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Oct 5 17:04:17.385: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.385: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Oct 5 17:04:17.385: INFO: Checking APIGroup: autoscaling
Oct 5 17:04:17.387: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Oct 5 17:04:17.387: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Oct 5 17:04:17.387: INFO: autoscaling/v1 matches autoscaling/v1
Oct 5 17:04:17.387: INFO: Checking APIGroup: batch
Oct 5 17:04:17.388: INFO: PreferredVersion.GroupVersion: batch/v1
Oct 5 17:04:17.388: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Oct 5 17:04:17.388: INFO: batch/v1 matches batch/v1
Oct 5 17:04:17.388: INFO: Checking APIGroup: certificates.k8s.io
Oct 5 17:04:17.389: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Oct 5 17:04:17.389: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.389: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Oct 5 17:04:17.389: INFO: Checking APIGroup: networking.k8s.io
Oct 5 17:04:17.390: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Oct 5 17:04:17.390: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.390: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Oct 5 17:04:17.390: INFO: Checking APIGroup: policy
Oct 5 17:04:17.391: INFO: PreferredVersion.GroupVersion: policy/v1beta1
Oct 5 17:04:17.391: INFO: Versions found [{policy/v1beta1 v1beta1}]
Oct 5 17:04:17.391: INFO: policy/v1beta1 matches policy/v1beta1
Oct 5 17:04:17.391: INFO: Checking APIGroup: rbac.authorization.k8s.io
Oct 5 17:04:17.393: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Oct 5 17:04:17.393: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.393: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Oct 5 17:04:17.393: INFO: Checking APIGroup: storage.k8s.io
Oct 5 17:04:17.394: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Oct 5 17:04:17.394: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.394: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Oct 5 17:04:17.394: INFO: Checking APIGroup: admissionregistration.k8s.io
Oct 5 17:04:17.395: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Oct 5 17:04:17.395: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.395: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Oct 5 17:04:17.395: INFO: Checking APIGroup: apiextensions.k8s.io
Oct 5 17:04:17.396: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Oct 5 17:04:17.396: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.396: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Oct 5 17:04:17.396: INFO: Checking APIGroup: scheduling.k8s.io
Oct 5 17:04:17.397: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Oct 5 17:04:17.397: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.397: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Oct 5 17:04:17.397: INFO: Checking APIGroup: coordination.k8s.io
Oct 5 17:04:17.398: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Oct 5 17:04:17.398: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.398: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Oct 5 17:04:17.398: INFO: Checking APIGroup: node.k8s.io
Oct 5 17:04:17.399: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1
Oct 5 17:04:17.399: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.399: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1
Oct 5 17:04:17.400: INFO: Checking APIGroup: discovery.k8s.io
Oct 5 17:04:17.400: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
Oct 5 17:04:17.401: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
Oct 5 17:04:17.401: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:04:17.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-8298" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":47,"skipped":720,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:04:17.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 5 17:04:17.480: INFO: Waiting up to 5m0s for pod "pod-da7cbb57-e107-4191-a250-baa149076add" in namespace "emptydir-5611" to be "Succeeded or Failed"
Oct 5 17:04:17.495: INFO: Pod "pod-da7cbb57-e107-4191-a250-baa149076add": Phase="Pending", Reason="", readiness=false. Elapsed: 15.341247ms
Oct 5 17:04:19.543: INFO: Pod "pod-da7cbb57-e107-4191-a250-baa149076add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063645299s
Oct 5 17:04:21.556: INFO: Pod "pod-da7cbb57-e107-4191-a250-baa149076add": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076409769s
STEP: Saw pod success
Oct 5 17:04:21.556: INFO: Pod "pod-da7cbb57-e107-4191-a250-baa149076add" satisfied condition "Succeeded or Failed"
Oct 5 17:04:21.560: INFO: Trying to get logs from node latest-worker2 pod pod-da7cbb57-e107-4191-a250-baa149076add container test-container:
STEP: delete the pod
Oct 5 17:04:21.618: INFO: Waiting for pod pod-da7cbb57-e107-4191-a250-baa149076add to disappear
Oct 5 17:04:21.625: INFO: Pod pod-da7cbb57-e107-4191-a250-baa149076add no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:04:21.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5611" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":727,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:04:21.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Oct 5 17:04:21.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8357 /api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed 0d8adb3f-7d11-4d95-9683-7d9ea1e78c31 3394160 0 2020-10-05 17:04:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 17:04:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 17:04:21.873: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8357 /api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed 0d8adb3f-7d11-4d95-9683-7d9ea1e78c31 3394161 0 2020-10-05 17:04:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 17:04:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 17:04:21.873: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8357 /api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed 0d8adb3f-7d11-4d95-9683-7d9ea1e78c31 3394162 0 2020-10-05 17:04:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 17:04:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Oct 5 17:04:31.929: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8357 /api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed 0d8adb3f-7d11-4d95-9683-7d9ea1e78c31 3394218 0 2020-10-05 17:04:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 17:04:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 17:04:31.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8357 /api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed 0d8adb3f-7d11-4d95-9683-7d9ea1e78c31 3394219 0 2020-10-05 17:04:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 17:04:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 17:04:31.929: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8357 /api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed 0d8adb3f-7d11-4d95-9683-7d9ea1e78c31 3394220 0 2020-10-05 17:04:21 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 17:04:31 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:04:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8357" for this suite.
• [SLOW TEST:10.318 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":49,"skipped":747,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:04:31.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5009
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Oct 5 17:04:32.051: INFO: Found 0 stateful pods, waiting for 3
Oct 5 17:04:42.057: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 5 17:04:42.057: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 5 17:04:42.057: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 5 17:04:52.057: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 5 17:04:52.057: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 5 17:04:52.057: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 5 17:04:52.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5009 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Oct 5 17:04:52.351: INFO: stderr: "I1005 17:04:52.206895 591 log.go:181] (0xc00082f340) (0xc000c42820) Create stream\nI1005 17:04:52.206952 591 log.go:181] (0xc00082f340) (0xc000c42820) Stream added, broadcasting: 1\nI1005 17:04:52.212245 591 log.go:181] (0xc00082f340) Reply frame received for 1\nI1005 17:04:52.212294 591 log.go:181] (0xc00082f340) (0xc000c42000) Create stream\nI1005 17:04:52.212308 591 log.go:181] (0xc00082f340) (0xc000c42000) Stream added, broadcasting: 3\nI1005 17:04:52.213689 591 log.go:181] (0xc00082f340) Reply frame received for 3\nI1005 17:04:52.213723 591 log.go:181] (0xc00082f340) (0xc000a1a000) Create stream\nI1005 17:04:52.213733 591 log.go:181] (0xc00082f340) (0xc000a1a000) Stream added, broadcasting: 5\nI1005 17:04:52.214698 591 log.go:181] (0xc00082f340) Reply frame received for 5\nI1005 17:04:52.292285 591 log.go:181] (0xc00082f340) Data frame received for 5\nI1005 17:04:52.292313 591 log.go:181] (0xc000a1a000) (5) Data frame handling\nI1005 17:04:52.292331 591 log.go:181] (0xc000a1a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 17:04:52.341065 591 log.go:181] (0xc00082f340) Data frame received for 3\nI1005 17:04:52.341110 591 log.go:181] (0xc000c42000) (3) Data frame handling\nI1005 17:04:52.341143 591 log.go:181] (0xc000c42000) (3) Data frame sent\nI1005 17:04:52.341417 591 log.go:181] (0xc00082f340) Data frame received for 3\nI1005 17:04:52.341442 591 log.go:181] (0xc000c42000) (3) Data frame handling\nI1005 17:04:52.342180 591 log.go:181] (0xc00082f340) Data frame received for 5\nI1005 17:04:52.342199 591 log.go:181] (0xc000a1a000) (5) Data frame handling\nI1005 17:04:52.344363 591 log.go:181] (0xc00082f340) Data frame received for 1\nI1005 17:04:52.344397 591 log.go:181] (0xc000c42820) (1) Data frame handling\nI1005 17:04:52.344420 591 log.go:181] (0xc000c42820) (1) Data frame sent\nI1005 17:04:52.344449 591 log.go:181] (0xc00082f340) (0xc000c42820) Stream removed, broadcasting: 1\nI1005 17:04:52.344488 591 log.go:181] (0xc00082f340) Go away received\nI1005 17:04:52.345199 591 log.go:181] (0xc00082f340) (0xc000c42820) Stream removed, broadcasting: 1\nI1005 17:04:52.345233 591 log.go:181] (0xc00082f340) (0xc000c42000) Stream removed, broadcasting: 3\nI1005 17:04:52.345252 591 log.go:181] (0xc00082f340) (0xc000a1a000) Stream removed, broadcasting: 5\n"
Oct 5 17:04:52.352: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Oct 5 17:04:52.352: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Oct 5 17:05:02.386: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Oct 5 17:05:12.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5009 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 5 17:05:12.628: INFO: stderr: "I1005 17:05:12.534633 609 log.go:181] (0xc00018c370) (0xc0006b2000) Create stream\nI1005 17:05:12.534680 609 log.go:181] (0xc00018c370) (0xc0006b2000) Stream added, broadcasting: 1\nI1005 17:05:12.536368 609 log.go:181] (0xc00018c370) Reply frame received for 1\nI1005 17:05:12.536421 609 log.go:181] (0xc00018c370) (0xc00053e140) Create stream\nI1005 17:05:12.536453 609 log.go:181] (0xc00018c370) (0xc00053e140) Stream added, broadcasting: 3\nI1005 17:05:12.537477 609 log.go:181] (0xc00018c370) Reply frame received for 3\nI1005 17:05:12.537502 609 log.go:181] (0xc00018c370) (0xc0006b20a0) Create stream\nI1005 17:05:12.537514 609 log.go:181] (0xc00018c370) (0xc0006b20a0) Stream added, broadcasting: 5\nI1005 17:05:12.538633 609 log.go:181] (0xc00018c370) Reply frame received for 5\nI1005 17:05:12.621339 609 log.go:181] (0xc00018c370) Data frame received for 5\nI1005 17:05:12.621371 609 log.go:181] (0xc0006b20a0) (5) Data frame handling\nI1005 17:05:12.621387 609 log.go:181] (0xc0006b20a0) (5) Data frame sent\nI1005 17:05:12.621398 609 log.go:181] (0xc00018c370) Data frame received for 3\nI1005 17:05:12.621406 609 log.go:181] (0xc00053e140) (3) Data frame handling\nI1005 17:05:12.621415 609 log.go:181] (0xc00053e140) (3) Data frame sent\nI1005 17:05:12.621422 609 log.go:181] (0xc00018c370) Data frame received for 3\nI1005 17:05:12.621428 609 log.go:181] (0xc00053e140) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 17:05:12.621579 609 log.go:181] (0xc00018c370) Data frame received for 5\nI1005 17:05:12.621598 609 log.go:181] (0xc0006b20a0) (5) Data frame handling\nI1005 17:05:12.623004 609 log.go:181] (0xc00018c370) Data frame received for 1\nI1005 17:05:12.623045 609 log.go:181] (0xc0006b2000) (1) Data frame handling\nI1005 17:05:12.623061 609 log.go:181] (0xc0006b2000) (1) Data frame sent\nI1005 17:05:12.623079 609 log.go:181] (0xc00018c370) (0xc0006b2000) Stream removed, broadcasting: 1\nI1005 17:05:12.623523 609 log.go:181] (0xc00018c370) (0xc0006b2000) Stream removed, broadcasting: 1\nI1005 17:05:12.623560 609 log.go:181] (0xc00018c370) (0xc00053e140) Stream removed, broadcasting: 3\nI1005 17:05:12.623573 609 log.go:181] (0xc00018c370) (0xc0006b20a0) Stream removed, broadcasting: 5\n"
Oct 5 17:05:12.629: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Oct 5 17:05:12.629: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 5 17:05:22.686: INFO: Waiting for StatefulSet statefulset-5009/ss2 to complete update
Oct 5 17:05:22.686: INFO: Waiting for Pod statefulset-5009/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct 5 17:05:22.686: INFO: Waiting for Pod statefulset-5009/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct 5 17:05:32.733: INFO: Waiting for StatefulSet statefulset-5009/ss2 to complete update
Oct 5 17:05:32.733: INFO: Waiting for Pod statefulset-5009/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct 5 17:05:42.697: INFO: Waiting for
StatefulSet statefulset-5009/ss2 to complete update STEP: Rolling back to a previous revision Oct 5 17:05:52.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5009 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 17:05:52.943: INFO: stderr: "I1005 17:05:52.822367 627 log.go:181] (0xc00013f4a0) (0xc000136a00) Create stream\nI1005 17:05:52.822421 627 log.go:181] (0xc00013f4a0) (0xc000136a00) Stream added, broadcasting: 1\nI1005 17:05:52.825606 627 log.go:181] (0xc00013f4a0) Reply frame received for 1\nI1005 17:05:52.825637 627 log.go:181] (0xc00013f4a0) (0xc000136000) Create stream\nI1005 17:05:52.825647 627 log.go:181] (0xc00013f4a0) (0xc000136000) Stream added, broadcasting: 3\nI1005 17:05:52.826300 627 log.go:181] (0xc00013f4a0) Reply frame received for 3\nI1005 17:05:52.826322 627 log.go:181] (0xc00013f4a0) (0xc00099e280) Create stream\nI1005 17:05:52.826328 627 log.go:181] (0xc00013f4a0) (0xc00099e280) Stream added, broadcasting: 5\nI1005 17:05:52.827025 627 log.go:181] (0xc00013f4a0) Reply frame received for 5\nI1005 17:05:52.903889 627 log.go:181] (0xc00013f4a0) Data frame received for 5\nI1005 17:05:52.903913 627 log.go:181] (0xc00099e280) (5) Data frame handling\nI1005 17:05:52.903931 627 log.go:181] (0xc00099e280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 17:05:52.932991 627 log.go:181] (0xc00013f4a0) Data frame received for 3\nI1005 17:05:52.933024 627 log.go:181] (0xc000136000) (3) Data frame handling\nI1005 17:05:52.933055 627 log.go:181] (0xc000136000) (3) Data frame sent\nI1005 17:05:52.933302 627 log.go:181] (0xc00013f4a0) Data frame received for 3\nI1005 17:05:52.933320 627 log.go:181] (0xc000136000) (3) Data frame handling\nI1005 17:05:52.933363 627 log.go:181] (0xc00013f4a0) Data frame received for 5\nI1005 17:05:52.933415 627 log.go:181] (0xc00099e280) (5) Data frame handling\nI1005 
17:05:52.937411 627 log.go:181] (0xc00013f4a0) Data frame received for 1\nI1005 17:05:52.937474 627 log.go:181] (0xc000136a00) (1) Data frame handling\nI1005 17:05:52.937519 627 log.go:181] (0xc000136a00) (1) Data frame sent\nI1005 17:05:52.937561 627 log.go:181] (0xc00013f4a0) (0xc000136a00) Stream removed, broadcasting: 1\nI1005 17:05:52.937591 627 log.go:181] (0xc00013f4a0) Go away received\nI1005 17:05:52.938029 627 log.go:181] (0xc00013f4a0) (0xc000136a00) Stream removed, broadcasting: 1\nI1005 17:05:52.938046 627 log.go:181] (0xc00013f4a0) (0xc000136000) Stream removed, broadcasting: 3\nI1005 17:05:52.938054 627 log.go:181] (0xc00013f4a0) (0xc00099e280) Stream removed, broadcasting: 5\n" Oct 5 17:05:52.943: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 17:05:52.943: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 17:06:02.978: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 5 17:06:13.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5009 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 17:06:13.257: INFO: stderr: "I1005 17:06:13.158387 645 log.go:181] (0xc00003b3f0) (0xc0004392c0) Create stream\nI1005 17:06:13.158446 645 log.go:181] (0xc00003b3f0) (0xc0004392c0) Stream added, broadcasting: 1\nI1005 17:06:13.162355 645 log.go:181] (0xc00003b3f0) Reply frame received for 1\nI1005 17:06:13.162406 645 log.go:181] (0xc00003b3f0) (0xc000439900) Create stream\nI1005 17:06:13.162425 645 log.go:181] (0xc00003b3f0) (0xc000439900) Stream added, broadcasting: 3\nI1005 17:06:13.163689 645 log.go:181] (0xc00003b3f0) Reply frame received for 3\nI1005 17:06:13.163731 645 log.go:181] (0xc00003b3f0) (0xc000c46000) Create stream\nI1005 17:06:13.163748 645 log.go:181] 
(0xc00003b3f0) (0xc000c46000) Stream added, broadcasting: 5\nI1005 17:06:13.164590 645 log.go:181] (0xc00003b3f0) Reply frame received for 5\nI1005 17:06:13.250606 645 log.go:181] (0xc00003b3f0) Data frame received for 5\nI1005 17:06:13.250634 645 log.go:181] (0xc000c46000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 17:06:13.250667 645 log.go:181] (0xc00003b3f0) Data frame received for 3\nI1005 17:06:13.250721 645 log.go:181] (0xc000439900) (3) Data frame handling\nI1005 17:06:13.250743 645 log.go:181] (0xc000439900) (3) Data frame sent\nI1005 17:06:13.250761 645 log.go:181] (0xc00003b3f0) Data frame received for 3\nI1005 17:06:13.250790 645 log.go:181] (0xc000439900) (3) Data frame handling\nI1005 17:06:13.250815 645 log.go:181] (0xc000c46000) (5) Data frame sent\nI1005 17:06:13.250837 645 log.go:181] (0xc00003b3f0) Data frame received for 5\nI1005 17:06:13.250861 645 log.go:181] (0xc000c46000) (5) Data frame handling\nI1005 17:06:13.252405 645 log.go:181] (0xc00003b3f0) Data frame received for 1\nI1005 17:06:13.252418 645 log.go:181] (0xc0004392c0) (1) Data frame handling\nI1005 17:06:13.252432 645 log.go:181] (0xc0004392c0) (1) Data frame sent\nI1005 17:06:13.252442 645 log.go:181] (0xc00003b3f0) (0xc0004392c0) Stream removed, broadcasting: 1\nI1005 17:06:13.252529 645 log.go:181] (0xc00003b3f0) Go away received\nI1005 17:06:13.252713 645 log.go:181] (0xc00003b3f0) (0xc0004392c0) Stream removed, broadcasting: 1\nI1005 17:06:13.252727 645 log.go:181] (0xc00003b3f0) (0xc000439900) Stream removed, broadcasting: 3\nI1005 17:06:13.252733 645 log.go:181] (0xc00003b3f0) (0xc000c46000) Stream removed, broadcasting: 5\n" Oct 5 17:06:13.257: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 17:06:13.257: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 17:06:23.278: INFO: Waiting for StatefulSet 
statefulset-5009/ss2 to complete update
Oct 5 17:06:23.279: INFO: Waiting for Pod statefulset-5009/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Oct 5 17:06:23.279: INFO: Waiting for Pod statefulset-5009/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Oct 5 17:06:33.286: INFO: Waiting for StatefulSet statefulset-5009/ss2 to complete update
Oct 5 17:06:33.286: INFO: Waiting for Pod statefulset-5009/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 5 17:06:43.288: INFO: Deleting all statefulset in ns statefulset-5009
Oct 5 17:06:43.291: INFO: Scaling statefulset ss2 to 0
Oct 5 17:07:13.307: INFO: Waiting for statefulset status.replicas updated to 0
Oct 5 17:07:13.310: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:07:13.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5009" for this suite.
• [SLOW TEST:161.383 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":50,"skipped":766,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:07:13.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:07:13.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6614" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":51,"skipped":770,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:07:13.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 17:07:13.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e" in namespace "downward-api-4066" to be "Succeeded or Failed"
Oct 5 17:07:13.503: INFO: Pod "downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.611723ms
Oct 5 17:07:15.521: INFO: Pod "downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048850096s
Oct 5 17:07:17.525: INFO: Pod "downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e": Phase="Running", Reason="", readiness=true. Elapsed: 4.052872341s
Oct 5 17:07:19.529: INFO: Pod "downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056717748s
STEP: Saw pod success
Oct 5 17:07:19.529: INFO: Pod "downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e" satisfied condition "Succeeded or Failed"
Oct 5 17:07:19.532: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e container client-container:
STEP: delete the pod
Oct 5 17:07:19.562: INFO: Waiting for pod downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e to disappear
Oct 5 17:07:19.579: INFO: Pod downwardapi-volume-b3d75ace-cb34-4ce5-920d-01251b80680e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:07:19.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4066" for this suite.
• [SLOW TEST:6.204 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":778,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:07:19.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-7zjx
STEP: Creating a pod to test atomic-volume-subpath
Oct 5 17:07:19.735: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7zjx" in namespace "subpath-2199" to be "Succeeded or Failed"
Oct 5 17:07:19.778: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Pending", Reason="", readiness=false. Elapsed: 43.104576ms
Oct 5 17:07:21.782: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047027141s
Oct 5 17:07:23.788: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 4.052741295s
Oct 5 17:07:25.794: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 6.059290049s
Oct 5 17:07:27.799: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 8.063633502s
Oct 5 17:07:29.804: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.06849245s
Oct 5 17:07:31.808: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 12.072614404s
Oct 5 17:07:33.811: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 14.076363291s
Oct 5 17:07:35.815: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 16.079906463s
Oct 5 17:07:37.820: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 18.084673168s
Oct 5 17:07:39.823: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.088201952s
Oct 5 17:07:41.826: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Running", Reason="", readiness=true. Elapsed: 22.091251778s
Oct 5 17:07:43.855: INFO: Pod "pod-subpath-test-configmap-7zjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.120392644s
STEP: Saw pod success
Oct 5 17:07:43.856: INFO: Pod "pod-subpath-test-configmap-7zjx" satisfied condition "Succeeded or Failed"
Oct 5 17:07:43.858: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-7zjx container test-container-subpath-configmap-7zjx:
STEP: delete the pod
Oct 5 17:07:43.917: INFO: Waiting for pod pod-subpath-test-configmap-7zjx to disappear
Oct 5 17:07:43.926: INFO: Pod pod-subpath-test-configmap-7zjx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7zjx
Oct 5 17:07:43.926: INFO: Deleting pod "pod-subpath-test-configmap-7zjx" in namespace "subpath-2199"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:07:43.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2199" for this suite.
• [SLOW TEST:24.342 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":53,"skipped":791,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:07:43.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct 5 17:07:43.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-776'
Oct 5 17:07:47.011: INFO: stderr: ""
Oct 5 17:07:47.011: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
Oct 5 17:07:47.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-776'
Oct 5 17:07:59.864: INFO: stderr: ""
Oct 5 17:07:59.864: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:07:59.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-776" for this suite.
• [SLOW TEST:15.933 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":54,"skipped":793,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:07:59.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:07:59.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:08:06.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4234" for this suite.
• [SLOW TEST:6.241 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":807,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for services [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:08:06.113: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5414.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5414.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5414.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5414.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5414.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.19.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.19.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.19.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.19.118_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5414.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5414.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5414.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5414.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5414.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5414.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 118.19.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.19.118_udp@PTR;check="$$(dig +tcp +noall +answer +search 118.19.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.19.118_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 17:08:12.390: INFO: Unable to read wheezy_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.394: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.397: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.399: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.419: INFO: Unable to read jessie_udp@dns-test-service.dns-5414.svc.cluster.local from pod 
dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.429: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:12.442: INFO: Lookups using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f failed for: [wheezy_udp@dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local jessie_udp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5414.svc.cluster.local] Oct 5 17:08:17.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:17.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:17.480: INFO: Unable to read jessie_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods 
dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:17.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:17.508: INFO: Lookups using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f failed for: [wheezy_udp@dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local jessie_udp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@dns-test-service.dns-5414.svc.cluster.local] Oct 5 17:08:22.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:22.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:22.460: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: Get "https://172.30.12.66:35633/api/v1/namespaces/dns-5414/pods/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f/proxy/results/wheezy_udp@_http._tcp.test-service-2.dns-5414.svc.cluster.local": stream error: stream ID 3309; INTERNAL_ERROR Oct 5 17:08:22.473: INFO: Unable to read jessie_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:22.475: INFO: Unable to read jessie_tcp@dns-test-service.dns-5414.svc.cluster.local from pod 
dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:22.494: INFO: Lookups using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f failed for: [wheezy_udp@dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-5414.svc.cluster.local jessie_udp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@dns-test-service.dns-5414.svc.cluster.local] Oct 5 17:08:27.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:27.452: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:27.481: INFO: Unable to read jessie_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:27.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:27.508: INFO: Lookups using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f failed for: [wheezy_udp@dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local jessie_udp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@dns-test-service.dns-5414.svc.cluster.local] Oct 5 17:08:32.447: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:32.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:32.474: INFO: Unable to read jessie_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:32.476: INFO: Unable to read jessie_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:32.497: INFO: Lookups using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f failed for: [wheezy_udp@dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local jessie_udp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@dns-test-service.dns-5414.svc.cluster.local] Oct 5 17:08:37.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:37.451: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:37.481: INFO: Unable to read jessie_udp@dns-test-service.dns-5414.svc.cluster.local from pod 
dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:37.484: INFO: Unable to read jessie_tcp@dns-test-service.dns-5414.svc.cluster.local from pod dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f: the server could not find the requested resource (get pods dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f) Oct 5 17:08:37.509: INFO: Lookups using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f failed for: [wheezy_udp@dns-test-service.dns-5414.svc.cluster.local wheezy_tcp@dns-test-service.dns-5414.svc.cluster.local jessie_udp@dns-test-service.dns-5414.svc.cluster.local jessie_tcp@dns-test-service.dns-5414.svc.cluster.local] Oct 5 17:08:42.507: INFO: DNS probes using dns-5414/dns-test-30ab32f1-fb52-4e27-9df0-221da5e4e31f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:08:42.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5414" for this suite. 
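The probe scripts in the test above derive a pod A record by rewriting the pod's IP with `awk` (the `$$` in the log is Go-template escaping for a literal shell `$`). As a minimal, cluster-free sketch of that same transformation — the IP `10.244.1.5` is an illustrative value, not taken from this run; the namespace `dns-5414` is the one the test used:

```shell
# A pod IP a.b.c.d maps to the DNS name a-b-c-d.<namespace>.pod.cluster.local.
# This reproduces the podARec derivation from the wheezy/jessie probe scripts.
pod_ip="10.244.1.5"   # illustrative pod IP, not from a live pod
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5414.pod.cluster.local"}')
echo "$podARec"   # -> 10-244-1-5.dns-5414.pod.cluster.local
```

Inside the cluster, the probe then resolves this name with `dig +notcp` (UDP) and `dig +tcp` and writes an `OK` marker file for each transport, which is what the `wheezy_udp@PodARecord` / `wheezy_tcp@PodARecord` result names refer to.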
• [SLOW TEST:36.893 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":56,"skipped":845,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:08:43.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8094 STEP: creating service affinity-clusterip in namespace services-8094 STEP: creating replication controller affinity-clusterip in namespace services-8094 I1005 17:08:43.336321 7 runners.go:190] Created replication controller with name: 
affinity-clusterip, namespace: services-8094, replica count: 3 I1005 17:08:46.386790 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:08:49.387041 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:08:49.393: INFO: Creating new exec pod Oct 5 17:08:54.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-8094 execpod-affinityxvxgc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Oct 5 17:08:54.625: INFO: stderr: "I1005 17:08:54.555015 701 log.go:181] (0xc00018db80) (0xc00081a8c0) Create stream\nI1005 17:08:54.555086 701 log.go:181] (0xc00018db80) (0xc00081a8c0) Stream added, broadcasting: 1\nI1005 17:08:54.558099 701 log.go:181] (0xc00018db80) Reply frame received for 1\nI1005 17:08:54.558156 701 log.go:181] (0xc00018db80) (0xc000cc4000) Create stream\nI1005 17:08:54.558179 701 log.go:181] (0xc00018db80) (0xc000cc4000) Stream added, broadcasting: 3\nI1005 17:08:54.559280 701 log.go:181] (0xc00018db80) Reply frame received for 3\nI1005 17:08:54.559323 701 log.go:181] (0xc00018db80) (0xc00057e320) Create stream\nI1005 17:08:54.559338 701 log.go:181] (0xc00018db80) (0xc00057e320) Stream added, broadcasting: 5\nI1005 17:08:54.560306 701 log.go:181] (0xc00018db80) Reply frame received for 5\nI1005 17:08:54.618580 701 log.go:181] (0xc00018db80) Data frame received for 5\nI1005 17:08:54.618614 701 log.go:181] (0xc00057e320) (5) Data frame handling\nI1005 17:08:54.618624 701 log.go:181] (0xc00057e320) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1005 17:08:54.618641 701 log.go:181] (0xc00018db80) Data frame received for 3\nI1005 17:08:54.618664 701 log.go:181] (0xc000cc4000) (3) 
Data frame handling\nI1005 17:08:54.618689 701 log.go:181] (0xc00018db80) Data frame received for 5\nI1005 17:08:54.618697 701 log.go:181] (0xc00057e320) (5) Data frame handling\nI1005 17:08:54.620394 701 log.go:181] (0xc00018db80) Data frame received for 1\nI1005 17:08:54.620412 701 log.go:181] (0xc00081a8c0) (1) Data frame handling\nI1005 17:08:54.620421 701 log.go:181] (0xc00081a8c0) (1) Data frame sent\nI1005 17:08:54.620430 701 log.go:181] (0xc00018db80) (0xc00081a8c0) Stream removed, broadcasting: 1\nI1005 17:08:54.620458 701 log.go:181] (0xc00018db80) Go away received\nI1005 17:08:54.620733 701 log.go:181] (0xc00018db80) (0xc00081a8c0) Stream removed, broadcasting: 1\nI1005 17:08:54.620745 701 log.go:181] (0xc00018db80) (0xc000cc4000) Stream removed, broadcasting: 3\nI1005 17:08:54.620750 701 log.go:181] (0xc00018db80) (0xc00057e320) Stream removed, broadcasting: 5\n" Oct 5 17:08:54.625: INFO: stdout: "" Oct 5 17:08:54.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-8094 execpod-affinityxvxgc -- /bin/sh -x -c nc -zv -t -w 2 10.106.46.242 80' Oct 5 17:08:54.823: INFO: stderr: "I1005 17:08:54.756338 719 log.go:181] (0xc000dc6fd0) (0xc00049ef00) Create stream\nI1005 17:08:54.756436 719 log.go:181] (0xc000dc6fd0) (0xc00049ef00) Stream added, broadcasting: 1\nI1005 17:08:54.762452 719 log.go:181] (0xc000dc6fd0) Reply frame received for 1\nI1005 17:08:54.762512 719 log.go:181] (0xc000dc6fd0) (0xc00049f680) Create stream\nI1005 17:08:54.762536 719 log.go:181] (0xc000dc6fd0) (0xc00049f680) Stream added, broadcasting: 3\nI1005 17:08:54.763446 719 log.go:181] (0xc000dc6fd0) Reply frame received for 3\nI1005 17:08:54.763496 719 log.go:181] (0xc000dc6fd0) (0xc000ab41e0) Create stream\nI1005 17:08:54.763519 719 log.go:181] (0xc000dc6fd0) (0xc000ab41e0) Stream added, broadcasting: 5\nI1005 17:08:54.764461 719 log.go:181] (0xc000dc6fd0) Reply frame received for 5\nI1005 17:08:54.816397 
719 log.go:181] (0xc000dc6fd0) Data frame received for 5\nI1005 17:08:54.816459 719 log.go:181] (0xc000ab41e0) (5) Data frame handling\nI1005 17:08:54.816506 719 log.go:181] (0xc000ab41e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.46.242 80\nConnection to 10.106.46.242 80 port [tcp/http] succeeded!\nI1005 17:08:54.816572 719 log.go:181] (0xc000dc6fd0) Data frame received for 5\nI1005 17:08:54.816600 719 log.go:181] (0xc000ab41e0) (5) Data frame handling\nI1005 17:08:54.816710 719 log.go:181] (0xc000dc6fd0) Data frame received for 3\nI1005 17:08:54.816730 719 log.go:181] (0xc00049f680) (3) Data frame handling\nI1005 17:08:54.818161 719 log.go:181] (0xc000dc6fd0) Data frame received for 1\nI1005 17:08:54.818270 719 log.go:181] (0xc00049ef00) (1) Data frame handling\nI1005 17:08:54.818316 719 log.go:181] (0xc00049ef00) (1) Data frame sent\nI1005 17:08:54.818344 719 log.go:181] (0xc000dc6fd0) (0xc00049ef00) Stream removed, broadcasting: 1\nI1005 17:08:54.818368 719 log.go:181] (0xc000dc6fd0) Go away received\nI1005 17:08:54.818670 719 log.go:181] (0xc000dc6fd0) (0xc00049ef00) Stream removed, broadcasting: 1\nI1005 17:08:54.818683 719 log.go:181] (0xc000dc6fd0) (0xc00049f680) Stream removed, broadcasting: 3\nI1005 17:08:54.818691 719 log.go:181] (0xc000dc6fd0) (0xc000ab41e0) Stream removed, broadcasting: 5\n" Oct 5 17:08:54.823: INFO: stdout: "" Oct 5 17:08:54.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-8094 execpod-affinityxvxgc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.106.46.242:80/ ; done' Oct 5 17:08:55.148: INFO: stderr: "I1005 17:08:54.961285 737 log.go:181] (0xc000e14dc0) (0xc00061f360) Create stream\nI1005 17:08:54.961337 737 log.go:181] (0xc000e14dc0) (0xc00061f360) Stream added, broadcasting: 1\nI1005 17:08:54.966235 737 log.go:181] (0xc000e14dc0) Reply frame received for 1\nI1005 17:08:54.966276 737 log.go:181] 
(0xc000e14dc0) (0xc000d90000) Create stream\nI1005 17:08:54.966305 737 log.go:181] (0xc000e14dc0) (0xc000d90000) Stream added, broadcasting: 3\nI1005 17:08:54.967330 737 log.go:181] (0xc000e14dc0) Reply frame received for 3\nI1005 17:08:54.967368 737 log.go:181] (0xc000e14dc0) (0xc0003cbf40) Create stream\nI1005 17:08:54.967385 737 log.go:181] (0xc000e14dc0) (0xc0003cbf40) Stream added, broadcasting: 5\nI1005 17:08:54.968106 737 log.go:181] (0xc000e14dc0) Reply frame received for 5\nI1005 17:08:55.028172 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.028213 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.028225 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.028243 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.028251 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.028265 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.034786 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.034806 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.034819 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.035349 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.035367 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.035394 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.035430 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.035444 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.035464 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\nI1005 17:08:55.042791 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.042828 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.042857 737 log.go:181] 
(0xc000d90000) (3) Data frame sent\nI1005 17:08:55.043633 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.043652 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.043673 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.043703 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.043724 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.043744 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.047426 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.047448 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.047465 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.047869 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.047904 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.047915 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.047939 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.047961 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.047982 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.055321 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.055351 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.055370 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.056279 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.056312 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.056325 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.056341 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.056351 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.056361 737 log.go:181] 
(0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.062398 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.062419 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.062438 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.063409 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.063445 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.063497 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.063524 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.063545 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.063559 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.069451 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.069478 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.069501 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.070152 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.070174 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.070192 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.070223 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.070236 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.070255 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.076486 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.076512 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.076526 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.077481 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.077522 737 log.go:181] (0xc0003cbf40) (5) 
Data frame handling\nI1005 17:08:55.077551 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.077600 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.077687 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.077739 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.085367 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.085384 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.085393 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.086411 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.086425 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.086433 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.086457 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.086484 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.086517 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.093464 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.093494 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.093520 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.094458 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.094487 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.094498 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.094513 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.094521 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.094530 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.098205 737 log.go:181] (0xc000e14dc0) Data frame received for 
3\nI1005 17:08:55.098229 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.098247 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.099127 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.099177 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.099201 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.099222 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.099234 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.099253 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.106827 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.106862 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.106895 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.107292 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.107321 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.107334 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\nI1005 17:08:55.107344 737 log.go:181] (0xc000e14dc0) Data frame received for 5\n+ echo\n+ curlI1005 17:08:55.107363 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.107382 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.107422 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.107451 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.107474 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.111989 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.112016 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.112037 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.112761 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 
17:08:55.112786 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.112800 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.112816 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.112916 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.112939 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.119237 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.119263 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.119284 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.119913 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.119944 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.119983 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.120011 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.120033 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.120072 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\nI1005 17:08:55.120092 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.120113 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.120148 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\nI1005 17:08:55.126393 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.126419 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.126441 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.127184 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.127206 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.127224 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.127243 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 
17:08:55.127256 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.127269 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\nI1005 17:08:55.127281 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.127291 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.127313 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\nI1005 17:08:55.133186 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.133211 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.133254 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.134040 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.134076 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.134091 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.134115 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.134139 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.134161 737 log.go:181] (0xc0003cbf40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.106.46.242:80/\nI1005 17:08:55.137440 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.137468 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.137496 737 log.go:181] (0xc000d90000) (3) Data frame sent\nI1005 17:08:55.137749 737 log.go:181] (0xc000e14dc0) Data frame received for 3\nI1005 17:08:55.137782 737 log.go:181] (0xc000d90000) (3) Data frame handling\nI1005 17:08:55.138256 737 log.go:181] (0xc000e14dc0) Data frame received for 5\nI1005 17:08:55.138274 737 log.go:181] (0xc0003cbf40) (5) Data frame handling\nI1005 17:08:55.139721 737 log.go:181] (0xc000e14dc0) Data frame received for 1\nI1005 17:08:55.139754 737 log.go:181] (0xc00061f360) (1) Data frame handling\nI1005 17:08:55.139773 737 log.go:181] (0xc00061f360) (1) Data frame sent\nI1005 
17:08:55.139799 737 log.go:181] (0xc000e14dc0) (0xc00061f360) Stream removed, broadcasting: 1\nI1005 17:08:55.139829 737 log.go:181] (0xc000e14dc0) Go away received\nI1005 17:08:55.140212 737 log.go:181] (0xc000e14dc0) (0xc00061f360) Stream removed, broadcasting: 1\nI1005 17:08:55.140235 737 log.go:181] (0xc000e14dc0) (0xc000d90000) Stream removed, broadcasting: 3\nI1005 17:08:55.140247 737 log.go:181] (0xc000e14dc0) (0xc0003cbf40) Stream removed, broadcasting: 5\n" Oct 5 17:08:55.149: INFO: stdout: "\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f\naffinity-clusterip-29m8f" Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received 
response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Received response from host: affinity-clusterip-29m8f Oct 5 17:08:55.149: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8094, will wait for the garbage collector to delete the pods Oct 5 17:08:55.265: INFO: Deleting ReplicationController affinity-clusterip took: 21.815986ms Oct 5 17:08:55.765: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.259335ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:00.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8094" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.441 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":57,"skipped":853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:00.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 5 17:09:05.110: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e47e197e-cd7d-499e-a787-3397152d2806" Oct 5 17:09:05.110: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e47e197e-cd7d-499e-a787-3397152d2806" in namespace "pods-3880" to be "terminated due to deadline exceeded" Oct 5 17:09:05.126: INFO: Pod "pod-update-activedeadlineseconds-e47e197e-cd7d-499e-a787-3397152d2806": Phase="Running", Reason="", readiness=true. Elapsed: 15.820384ms Oct 5 17:09:07.131: INFO: Pod "pod-update-activedeadlineseconds-e47e197e-cd7d-499e-a787-3397152d2806": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.021437494s Oct 5 17:09:07.132: INFO: Pod "pod-update-activedeadlineseconds-e47e197e-cd7d-499e-a787-3397152d2806" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:07.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3880" for this suite. • [SLOW TEST:6.694 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":885,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:07.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Oct 5 17:09:07.354: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Oct 5 17:09:07.913: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 5 17:09:10.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514547, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514547, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514548, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514547, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:09:13.255: INFO: Waited 822.561549ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:13.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6321" for this suite. • [SLOW TEST:6.760 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":59,"skipped":960,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:13.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:14.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9244" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":60,"skipped":973,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:14.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 5 17:09:14.548: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:29.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4671" for this suite. 
• [SLOW TEST:15.772 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":977,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:29.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:09:29.958: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 5 17:09:32.925: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7401 create -f -' Oct 5 17:09:36.972: INFO: stderr: "" Oct 5 
17:09:36.972: INFO: stdout: "e2e-test-crd-publish-openapi-4652-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 5 17:09:36.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7401 delete e2e-test-crd-publish-openapi-4652-crds test-cr' Oct 5 17:09:37.072: INFO: stderr: "" Oct 5 17:09:37.072: INFO: stdout: "e2e-test-crd-publish-openapi-4652-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 5 17:09:37.072: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7401 apply -f -' Oct 5 17:09:37.378: INFO: stderr: "" Oct 5 17:09:37.378: INFO: stdout: "e2e-test-crd-publish-openapi-4652-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 5 17:09:37.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7401 delete e2e-test-crd-publish-openapi-4652-crds test-cr' Oct 5 17:09:37.524: INFO: stderr: "" Oct 5 17:09:37.524: INFO: stdout: "e2e-test-crd-publish-openapi-4652-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 5 17:09:37.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4652-crds' Oct 5 17:09:37.791: INFO: stderr: "" Oct 5 17:09:37.791: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4652-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:40.747: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7401" for this suite. • [SLOW TEST:10.853 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":62,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:40.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:09:40.818: INFO: >>> kubeConfig: 
/root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:41.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7157" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":63,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:41.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Oct 5 17:09:41.473: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config proxy 
--unix-socket=/tmp/kubectl-proxy-unix771729395/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:41.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6053" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":64,"skipped":1078,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:41.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 5 17:09:41.669: INFO: Waiting up to 5m0s for pod "pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a" in namespace "emptydir-7175" to be "Succeeded or Failed" Oct 5 17:09:41.673: INFO: Pod "pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327089ms Oct 5 17:09:43.695: INFO: Pod "pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025992394s Oct 5 17:09:45.707: INFO: Pod "pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037992348s STEP: Saw pod success Oct 5 17:09:45.707: INFO: Pod "pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a" satisfied condition "Succeeded or Failed" Oct 5 17:09:45.710: INFO: Trying to get logs from node latest-worker2 pod pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a container test-container: STEP: delete the pod Oct 5 17:09:45.763: INFO: Waiting for pod pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a to disappear Oct 5 17:09:45.774: INFO: Pod pod-476e1945-a3ee-4c06-ab55-fe7a4c4dd71a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:45.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7175" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":65,"skipped":1079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:45.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default 
service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:45.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3437" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":66,"skipped":1117,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:45.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:09:45.928: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:50.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9922" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":67,"skipped":1125,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:50.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 5 17:09:50.125: INFO: Waiting up to 5m0s for pod "pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae" in namespace "emptydir-4626" to be "Succeeded or Failed" Oct 5 17:09:50.141: INFO: Pod "pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae": Phase="Pending", Reason="", readiness=false. Elapsed: 16.127623ms Oct 5 17:09:52.145: INFO: Pod "pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020232794s Oct 5 17:09:54.150: INFO: Pod "pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02518163s STEP: Saw pod success Oct 5 17:09:54.150: INFO: Pod "pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae" satisfied condition "Succeeded or Failed" Oct 5 17:09:54.153: INFO: Trying to get logs from node latest-worker pod pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae container test-container: STEP: delete the pod Oct 5 17:09:54.193: INFO: Waiting for pod pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae to disappear Oct 5 17:09:54.213: INFO: Pod pod-dc97ea65-255b-4e7a-a20a-7c27974f14ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:54.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4626" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":68,"skipped":1141,"failed":0} ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:54.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 
a pod to test downward api env vars Oct 5 17:09:54.352: INFO: Waiting up to 5m0s for pod "downward-api-05cc10af-a6cf-4381-ae49-20886bd56330" in namespace "downward-api-5163" to be "Succeeded or Failed" Oct 5 17:09:54.362: INFO: Pod "downward-api-05cc10af-a6cf-4381-ae49-20886bd56330": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15403ms Oct 5 17:09:56.552: INFO: Pod "downward-api-05cc10af-a6cf-4381-ae49-20886bd56330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199764816s Oct 5 17:09:58.556: INFO: Pod "downward-api-05cc10af-a6cf-4381-ae49-20886bd56330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204184045s STEP: Saw pod success Oct 5 17:09:58.557: INFO: Pod "downward-api-05cc10af-a6cf-4381-ae49-20886bd56330" satisfied condition "Succeeded or Failed" Oct 5 17:09:58.560: INFO: Trying to get logs from node latest-worker2 pod downward-api-05cc10af-a6cf-4381-ae49-20886bd56330 container dapi-container: STEP: delete the pod Oct 5 17:09:58.588: INFO: Waiting for pod downward-api-05cc10af-a6cf-4381-ae49-20886bd56330 to disappear Oct 5 17:09:58.596: INFO: Pod downward-api-05cc10af-a6cf-4381-ae49-20886bd56330 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:09:58.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5163" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":69,"skipped":1141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:09:58.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 5 17:10:02.693: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7828 PodName:var-expansion-9ff5e320-5c0e-4cdd-b59d-ee74a249515d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:10:02.693: INFO: >>> kubeConfig: /root/.kube/config I1005 17:10:02.719564 7 log.go:181] (0xc00331f130) (0xc000e85ae0) Create stream I1005 17:10:02.719596 7 log.go:181] (0xc00331f130) (0xc000e85ae0) Stream added, broadcasting: 1 I1005 17:10:02.721515 7 log.go:181] (0xc00331f130) Reply frame received for 1 I1005 17:10:02.721556 7 log.go:181] (0xc00331f130) (0xc000e85b80) Create stream I1005 17:10:02.721575 7 log.go:181] 
(0xc00331f130) (0xc000e85b80) Stream added, broadcasting: 3 I1005 17:10:02.722611 7 log.go:181] (0xc00331f130) Reply frame received for 3 I1005 17:10:02.722668 7 log.go:181] (0xc00331f130) (0xc000e85c20) Create stream I1005 17:10:02.722684 7 log.go:181] (0xc00331f130) (0xc000e85c20) Stream added, broadcasting: 5 I1005 17:10:02.723480 7 log.go:181] (0xc00331f130) Reply frame received for 5 I1005 17:10:02.810995 7 log.go:181] (0xc00331f130) Data frame received for 5 I1005 17:10:02.811052 7 log.go:181] (0xc000e85c20) (5) Data frame handling I1005 17:10:02.811095 7 log.go:181] (0xc00331f130) Data frame received for 3 I1005 17:10:02.811119 7 log.go:181] (0xc000e85b80) (3) Data frame handling I1005 17:10:02.813076 7 log.go:181] (0xc00331f130) Data frame received for 1 I1005 17:10:02.813126 7 log.go:181] (0xc000e85ae0) (1) Data frame handling I1005 17:10:02.813171 7 log.go:181] (0xc000e85ae0) (1) Data frame sent I1005 17:10:02.813251 7 log.go:181] (0xc00331f130) (0xc000e85ae0) Stream removed, broadcasting: 1 I1005 17:10:02.813330 7 log.go:181] (0xc00331f130) Go away received I1005 17:10:02.813408 7 log.go:181] (0xc00331f130) (0xc000e85ae0) Stream removed, broadcasting: 1 I1005 17:10:02.813425 7 log.go:181] (0xc00331f130) (0xc000e85b80) Stream removed, broadcasting: 3 I1005 17:10:02.813431 7 log.go:181] (0xc00331f130) (0xc000e85c20) Stream removed, broadcasting: 5 STEP: test for file in mounted path Oct 5 17:10:02.816: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7828 PodName:var-expansion-9ff5e320-5c0e-4cdd-b59d-ee74a249515d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:10:02.816: INFO: >>> kubeConfig: /root/.kube/config I1005 17:10:02.848761 7 log.go:181] (0xc000517ce0) (0xc003565400) Create stream I1005 17:10:02.848800 7 log.go:181] (0xc000517ce0) (0xc003565400) Stream added, broadcasting: 1 I1005 17:10:02.850948 7 log.go:181] (0xc000517ce0) Reply frame 
received for 1 I1005 17:10:02.851004 7 log.go:181] (0xc000517ce0) (0xc000e85cc0) Create stream I1005 17:10:02.851031 7 log.go:181] (0xc000517ce0) (0xc000e85cc0) Stream added, broadcasting: 3 I1005 17:10:02.852139 7 log.go:181] (0xc000517ce0) Reply frame received for 3 I1005 17:10:02.852165 7 log.go:181] (0xc000517ce0) (0xc000e2c8c0) Create stream I1005 17:10:02.852172 7 log.go:181] (0xc000517ce0) (0xc000e2c8c0) Stream added, broadcasting: 5 I1005 17:10:02.853212 7 log.go:181] (0xc000517ce0) Reply frame received for 5 I1005 17:10:02.915496 7 log.go:181] (0xc000517ce0) Data frame received for 5 I1005 17:10:02.915525 7 log.go:181] (0xc000e2c8c0) (5) Data frame handling I1005 17:10:02.915547 7 log.go:181] (0xc000517ce0) Data frame received for 3 I1005 17:10:02.915552 7 log.go:181] (0xc000e85cc0) (3) Data frame handling I1005 17:10:02.917323 7 log.go:181] (0xc000517ce0) Data frame received for 1 I1005 17:10:02.917365 7 log.go:181] (0xc003565400) (1) Data frame handling I1005 17:10:02.917382 7 log.go:181] (0xc003565400) (1) Data frame sent I1005 17:10:02.917402 7 log.go:181] (0xc000517ce0) (0xc003565400) Stream removed, broadcasting: 1 I1005 17:10:02.917422 7 log.go:181] (0xc000517ce0) Go away received I1005 17:10:02.917612 7 log.go:181] (0xc000517ce0) (0xc003565400) Stream removed, broadcasting: 1 I1005 17:10:02.917666 7 log.go:181] (0xc000517ce0) (0xc000e85cc0) Stream removed, broadcasting: 3 I1005 17:10:02.917688 7 log.go:181] (0xc000517ce0) (0xc000e2c8c0) Stream removed, broadcasting: 5 STEP: updating the annotation value Oct 5 17:10:03.428: INFO: Successfully updated pod "var-expansion-9ff5e320-5c0e-4cdd-b59d-ee74a249515d" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 5 17:10:03.498: INFO: Deleting pod "var-expansion-9ff5e320-5c0e-4cdd-b59d-ee74a249515d" in namespace "var-expansion-7828" Oct 5 17:10:03.503: INFO: Wait up to 5m0s for pod "var-expansion-9ff5e320-5c0e-4cdd-b59d-ee74a249515d" to be fully deleted [AfterEach] [k8s.io] 
Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:10:41.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7828" for this suite. • [SLOW TEST:42.943 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":70,"skipped":1176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:10:41.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:10:45.711: INFO: Waiting up to 5m0s for pod "client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0" in namespace "pods-8282" to be "Succeeded or Failed" Oct 5 17:10:45.767: INFO: Pod "client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 56.553795ms Oct 5 17:10:47.773: INFO: Pod "client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062489638s Oct 5 17:10:50.139: INFO: Pod "client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.42794537s STEP: Saw pod success Oct 5 17:10:50.139: INFO: Pod "client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0" satisfied condition "Succeeded or Failed" Oct 5 17:10:50.142: INFO: Trying to get logs from node latest-worker2 pod client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0 container env3cont: STEP: delete the pod Oct 5 17:10:50.290: INFO: Waiting for pod client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0 to disappear Oct 5 17:10:50.297: INFO: Pod client-envvars-0e7db101-1782-4025-a5bc-3da2e1efc6b0 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:10:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8282" for this suite. 
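The Pods test above relies on the kubelet injecting Docker-link-style environment variables for every Service that exists when a pod starts. For a Service shaped like the sketch below (hypothetical name), containers created afterwards in the same namespace see variables derived from the service name:

```yaml
# Hypothetical service; pods started after it (in the same namespace)
# receive FOO-prefixed env vars of the form:
#   FOO_SERVICE_HOST=<cluster IP>
#   FOO_SERVICE_PORT=8080
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: foo
  ports:
  - port: 8080
    targetPort: 8080
```

Note the ordering constraint the test depends on: only services created before the pod are reflected in its environment, which is why the suite waits for its server pod before launching the env-checking client pod.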
• [SLOW TEST:8.756 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":71,"skipped":1200,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:10:50.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:10:50.835: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:10:52.846: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514650, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514650, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514650, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514650, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:10:55.892: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:10:56.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9656" for this suite. STEP: Destroying namespace "webhook-9656-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.366 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":72,"skipped":1218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:10:56.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] 
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:11:12.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-836" for this suite. • [SLOW TEST:16.114 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":73,"skipped":1252,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:11:12.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:11:21.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5877" for this suite. • [SLOW TEST:8.273 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1253,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:11:21.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:11:27.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9157" for this suite. STEP: Destroying namespace "nsdeletetest-2052" for this suite. Oct 5 17:11:27.334: INFO: Namespace nsdeletetest-2052 was already deleted STEP: Destroying namespace "nsdeletetest-4058" for this suite. 
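The Namespaces test above checks that deleting a namespace garbage-collects the Services inside it, and that a recreated namespace of the same name starts empty. A sketch of the setup as plain manifests (names hypothetical):

```yaml
# Hypothetical namespace + service; deleting the namespace cascades to
# every object scoped inside it, including this Service.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  namespace: demo
spec:
  ports:
  - port: 80
```

Deleting `demo` removes `demo-svc` with it; the test then recreates the namespace and verifies no services survive.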
• [SLOW TEST:6.278 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":75,"skipped":1264,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:11:27.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:11:27.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9327" for this suite. 
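The Lease test above exercises basic CRUD on the `coordination.k8s.io` Lease resource, the primitive behind node heartbeats and leader election. A minimal Lease object looks roughly like this (name, holder, and timestamp are illustrative):

```yaml
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease        # hypothetical
  namespace: default
spec:
  holderIdentity: holder-1   # identity of the current lease owner
  leaseDurationSeconds: 15   # how long the holder is presumed valid
  renewTime: "2020-10-05T17:11:27.000000Z"  # microsecond-precision MicroTime
```

The conformance test creates, gets, lists, watches, updates, patches, and deletes objects of this kind to confirm the API group is served.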
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":76,"skipped":1277,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:11:27.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 17:11:27.596: INFO: Waiting up to 5m0s for pod "downward-api-85d1827e-c692-47b5-9ee0-abe6da667297" in namespace "downward-api-2942" to be "Succeeded or Failed" Oct 5 17:11:27.618: INFO: Pod "downward-api-85d1827e-c692-47b5-9ee0-abe6da667297": Phase="Pending", Reason="", readiness=false. Elapsed: 21.521222ms Oct 5 17:11:29.624: INFO: Pod "downward-api-85d1827e-c692-47b5-9ee0-abe6da667297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027273035s Oct 5 17:11:31.629: INFO: Pod "downward-api-85d1827e-c692-47b5-9ee0-abe6da667297": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032505259s STEP: Saw pod success Oct 5 17:11:31.629: INFO: Pod "downward-api-85d1827e-c692-47b5-9ee0-abe6da667297" satisfied condition "Succeeded or Failed" Oct 5 17:11:31.632: INFO: Trying to get logs from node latest-worker2 pod downward-api-85d1827e-c692-47b5-9ee0-abe6da667297 container dapi-container: STEP: delete the pod Oct 5 17:11:31.758: INFO: Waiting for pod downward-api-85d1827e-c692-47b5-9ee0-abe6da667297 to disappear Oct 5 17:11:31.852: INFO: Pod downward-api-85d1827e-c692-47b5-9ee0-abe6da667297 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:11:31.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2942" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1278,"failed":0} ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:11:31.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] 
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:11:31.948: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:11:33.953: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:11:35.953: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:37.951: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:39.952: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:41.956: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:43.953: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:45.953: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:47.953: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:49.952: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:51.952: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = false) Oct 5 17:11:53.953: INFO: The status of Pod test-webserver-882654a0-b521-4f1c-bb4f-af8ecb9a7469 is Running (Ready = true) Oct 5 17:11:53.955: INFO: Container started at 2020-10-05 17:11:34 +0000 UTC, pod became ready at 2020-10-05 17:11:52 +0000 UTC [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:11:53.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2399" for this suite. • [SLOW TEST:22.085 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":78,"skipped":1278,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:11:53.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set 
[NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 17:12:00.557: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:12:00.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7623" for this suite. • [SLOW TEST:6.637 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] 
[Conformance]","total":303,"completed":79,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:12:00.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 5 17:12:00.724: INFO: Waiting up to 5m0s for pod "pod-6a741947-79c8-49ca-844d-e125a4267575" in namespace "emptydir-9654" to be "Succeeded or Failed" Oct 5 17:12:00.767: INFO: Pod "pod-6a741947-79c8-49ca-844d-e125a4267575": Phase="Pending", Reason="", readiness=false. Elapsed: 43.209617ms Oct 5 17:12:02.775: INFO: Pod "pod-6a741947-79c8-49ca-844d-e125a4267575": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050791923s Oct 5 17:12:04.779: INFO: Pod "pod-6a741947-79c8-49ca-844d-e125a4267575": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054459844s STEP: Saw pod success Oct 5 17:12:04.779: INFO: Pod "pod-6a741947-79c8-49ca-844d-e125a4267575" satisfied condition "Succeeded or Failed" Oct 5 17:12:04.781: INFO: Trying to get logs from node latest-worker pod pod-6a741947-79c8-49ca-844d-e125a4267575 container test-container: STEP: delete the pod Oct 5 17:12:04.996: INFO: Waiting for pod pod-6a741947-79c8-49ca-844d-e125a4267575 to disappear Oct 5 17:12:05.040: INFO: Pod pod-6a741947-79c8-49ca-844d-e125a4267575 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:12:05.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9654" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1363,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:12:05.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 5 17:12:13.244: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 5 17:12:13.259: INFO: Pod pod-with-poststart-http-hook still exists Oct 5 17:12:15.259: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 5 17:12:15.264: INFO: Pod pod-with-poststart-http-hook still exists Oct 5 17:12:17.259: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 5 17:12:17.271: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:12:17.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2912" for this suite. 
• [SLOW TEST:12.231 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:12:17.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:12:18.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:12:20.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:12:22.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514738, 
loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:12:25.248: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:12:25.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:12:26.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7461" for this suite. STEP: Destroying namespace "webhook-7461-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.173 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":82,"skipped":1413,"failed":0} S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:12:26.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-5213 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5213 STEP: Deleting pre-stop pod Oct 5 17:12:39.581: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:12:39.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5213" for this suite. 
• [SLOW TEST:13.155 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":83,"skipped":1414,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:12:39.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 5 17:12:39.684: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1422' Oct 5 17:12:40.125: INFO: stderr: "" Oct 5 17:12:40.125: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 5 17:12:45.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1422 -o json' Oct 5 17:12:45.296: INFO: stderr: "" Oct 5 17:12:45.296: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-05T17:12:40Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T17:12:39Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n 
\"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.198\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T17:12:42Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1422\",\n \"resourceVersion\": \"3397233\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1422/pods/e2e-test-httpd-pod\",\n \"uid\": \"76ed5dc3-ac1d-4e69-a090-ff003c3fb68f\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wpl8x\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wpl8x\",\n 
\"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wpl8x\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T17:12:40Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T17:12:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T17:12:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T17:12:40Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://8954781235ad7401fa6ef60d504d29afa7ea86f89dc77dd4b3bc868181031266\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-10-05T17:12:42Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.198\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.198\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-05T17:12:40Z\"\n }\n}\n" STEP: replace the image in the pod Oct 5 17:12:45.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1422' Oct 5 17:12:45.655: INFO: stderr: "" Oct 5 17:12:45.655: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Oct 5 17:12:45.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1422' Oct 5 17:12:59.880: INFO: stderr: "" Oct 5 17:12:59.880: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:12:59.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1422" for this suite. • [SLOW TEST:20.382 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":84,"skipped":1416,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:12:59.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 5 17:13:00.152: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:13:17.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6915" for this suite. 
• [SLOW TEST:17.562 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":85,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:13:17.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Oct 5 17:13:21.680: INFO: Pod pod-hostip-e48d9255-7ae8-47c9-a727-37c60a684739 has hostIP: 172.18.0.16 [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:13:21.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-221" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1453,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:13:21.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1909/configmap-test-100322bd-9ebc-4d47-8db9-2e9de962306b STEP: Creating a pod to test consume configMaps Oct 5 17:13:21.850: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1" in namespace "configmap-1909" to be "Succeeded or Failed" Oct 5 17:13:21.865: INFO: Pod "pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.489032ms Oct 5 17:13:23.876: INFO: Pod "pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026215469s Oct 5 17:13:25.881: INFO: Pod "pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03095351s Oct 5 17:13:27.885: INFO: Pod "pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03557603s STEP: Saw pod success Oct 5 17:13:27.885: INFO: Pod "pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1" satisfied condition "Succeeded or Failed" Oct 5 17:13:27.889: INFO: Trying to get logs from node latest-worker pod pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1 container env-test: STEP: delete the pod Oct 5 17:13:27.942: INFO: Waiting for pod pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1 to disappear Oct 5 17:13:27.944: INFO: Pod pod-configmaps-cc760ea6-33ef-4687-b18a-23eec49963a1 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:13:27.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1909" for this suite. 
• [SLOW TEST:6.263 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:13:27.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Oct 5 17:13:28.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-1399 --restart=Never -- logs-generator --log-lines-total 
100 --run-duration 20s' Oct 5 17:13:28.137: INFO: stderr: "" Oct 5 17:13:28.137: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Oct 5 17:13:28.137: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 5 17:13:28.137: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1399" to be "running and ready, or succeeded" Oct 5 17:13:28.166: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 29.128373ms Oct 5 17:13:30.171: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034748418s Oct 5 17:13:32.177: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.040329952s Oct 5 17:13:32.177: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 5 17:13:32.177: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Oct 5 17:13:32.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1399' Oct 5 17:13:32.301: INFO: stderr: "" Oct 5 17:13:32.301: INFO: stdout: "I1005 17:13:30.899465 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/mzsr 438\nI1005 17:13:31.099620 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/m4w 354\nI1005 17:13:31.299669 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/mvbc 479\nI1005 17:13:31.499563 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/8qz 386\nI1005 17:13:31.699637 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/prm8 471\nI1005 17:13:31.899618 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/25g 350\nI1005 17:13:32.099563 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/s94 378\n" STEP: limiting log lines Oct 5 17:13:32.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1399 --tail=1' Oct 5 17:13:32.411: INFO: stderr: "" Oct 5 17:13:32.411: INFO: stdout: "I1005 17:13:32.299637 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/jtx 529\n" Oct 5 17:13:32.411: INFO: got output "I1005 17:13:32.299637 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/jtx 529\n" STEP: limiting log bytes Oct 5 17:13:32.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1399 --limit-bytes=1' Oct 5 17:13:32.520: INFO: stderr: "" Oct 5 17:13:32.520: INFO: stdout: "I" Oct 5 17:13:32.520: INFO: got output "I" STEP: exposing timestamps Oct 5 17:13:32.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs 
logs-generator logs-generator --namespace=kubectl-1399 --tail=1 --timestamps' Oct 5 17:13:32.630: INFO: stderr: "" Oct 5 17:13:32.630: INFO: stdout: "2020-10-05T17:13:32.499788363Z I1005 17:13:32.499604 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/jd6w 511\n" Oct 5 17:13:32.630: INFO: got output "2020-10-05T17:13:32.499788363Z I1005 17:13:32.499604 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/jd6w 511\n" STEP: restricting to a time range Oct 5 17:13:35.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1399 --since=1s' Oct 5 17:13:35.263: INFO: stderr: "" Oct 5 17:13:35.263: INFO: stdout: "I1005 17:13:34.299636 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/sssq 520\nI1005 17:13:34.499656 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/2w5f 226\nI1005 17:13:34.699649 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/82x 451\nI1005 17:13:34.899615 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/wp9k 329\nI1005 17:13:35.099592 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/gd8 515\n" Oct 5 17:13:35.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1399 --since=24h' Oct 5 17:13:35.373: INFO: stderr: "" Oct 5 17:13:35.373: INFO: stdout: "I1005 17:13:30.899465 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/mzsr 438\nI1005 17:13:31.099620 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/m4w 354\nI1005 17:13:31.299669 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/mvbc 479\nI1005 17:13:31.499563 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/8qz 386\nI1005 17:13:31.699637 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/prm8 471\nI1005 17:13:31.899618 1 
logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/25g 350\nI1005 17:13:32.099563 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/s94 378\nI1005 17:13:32.299637 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/jtx 529\nI1005 17:13:32.499604 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/jd6w 511\nI1005 17:13:32.699621 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/csmg 351\nI1005 17:13:32.899606 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/r47v 392\nI1005 17:13:33.099619 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/b4j 560\nI1005 17:13:33.299638 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/rr4 440\nI1005 17:13:33.499628 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/rlhg 459\nI1005 17:13:33.699662 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/f2hp 485\nI1005 17:13:33.899601 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/xsd 324\nI1005 17:13:34.099623 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/h7l 407\nI1005 17:13:34.299636 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/sssq 520\nI1005 17:13:34.499656 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/2w5f 226\nI1005 17:13:34.699649 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/82x 451\nI1005 17:13:34.899615 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/wp9k 329\nI1005 17:13:35.099592 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/gd8 515\nI1005 17:13:35.299585 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/gss8 484\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Oct 5 17:13:35.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete pod logs-generator 
--namespace=kubectl-1399' Oct 5 17:13:39.817: INFO: stderr: "" Oct 5 17:13:39.817: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:13:39.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1399" for this suite. • [SLOW TEST:11.871 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":88,"skipped":1502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:13:39.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:13:39.986: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 5 17:13:44.990: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 5 17:13:44.990: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 17:13:45.044: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4540 /apis/apps/v1/namespaces/deployment-4540/deployments/test-cleanup-deployment e9028363-5d2c-4455-8100-b60219237571 3397567 1 2020-10-05 17:13:45 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-10-05 17:13:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048d4c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Oct 5 17:13:45.123: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-4540 /apis/apps/v1/namespaces/deployment-4540/replicasets/test-cleanup-deployment-5d446bdd47 89fb63e5-1dfb-4d7a-8efe-aa83fd31902d 3397569 1 2020-10-05 17:13:45 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e9028363-5d2c-4455-8100-b60219237571 0xc004be63e7 0xc004be63e8}] [] [{kube-controller-manager Update apps/v1 2020-10-05 17:13:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9028363-5d2c-4455-8100-b60219237571\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004be6498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:13:45.123: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Oct 5 17:13:45.123: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4540 /apis/apps/v1/namespaces/deployment-4540/replicasets/test-cleanup-controller 06d90dbc-478b-4f19-a0ca-f12a41a2b9bd 3397568 1 2020-10-05 17:13:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment e9028363-5d2c-4455-8100-b60219237571 0xc004be628f 0xc004be62a0}] [] [{e2e.test Update apps/v1 2020-10-05 17:13:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 17:13:45 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"e9028363-5d2c-4455-8100-b60219237571\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004be6368 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:13:45.211: INFO: Pod "test-cleanup-controller-d9qq7" is available: &Pod{ObjectMeta:{test-cleanup-controller-d9qq7 test-cleanup-controller- deployment-4540 /api/v1/namespaces/deployment-4540/pods/test-cleanup-controller-d9qq7 05d94853-73dd-40ab-a065-a5616169adc2 3397553 0 2020-10-05 17:13:39 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 06d90dbc-478b-4f19-a0ca-f12a41a2b9bd 0xc004be6a27 0xc004be6a28}] [] [{kube-controller-manager Update v1 2020-10-05 17:13:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06d90dbc-478b-4f19-a0ca-f12a41a2b9bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 17:13:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.190\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-np7qh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-np7qh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-np7qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,D
eprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:13:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:13:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:13:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.190,StartTime:2020-10-05 17:13:40 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 17:13:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d574ed6d2fe7e7284f89bf7f87ffdc2e892f55978ba281ff34c68729a18a11db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 17:13:45.211: INFO: Pod "test-cleanup-deployment-5d446bdd47-22nnx" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-22nnx test-cleanup-deployment-5d446bdd47- deployment-4540 /api/v1/namespaces/deployment-4540/pods/test-cleanup-deployment-5d446bdd47-22nnx 7742068a-f3ff-443f-9df1-5e14a50793a5 3397575 0 2020-10-05 17:13:45 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 89fb63e5-1dfb-4d7a-8efe-aa83fd31902d 0xc004be6be7 0xc004be6be8}] [] [{kube-controller-manager Update v1 2020-10-05 17:13:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fb63e5-1dfb-4d7a-8efe-aa83fd31902d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-np7qh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-np7qh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-np7qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:13:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:13:45.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4540" for this suite. • [SLOW TEST:5.485 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":89,"skipped":1536,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:13:45.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-e32a54a2-a07f-44af-b441-61eafa12c35a STEP: Creating configMap with name cm-test-opt-upd-4e6826e2-0c1b-4dd9-b235-4130c1da64e0 STEP: 
Creating the pod STEP: Deleting configmap cm-test-opt-del-e32a54a2-a07f-44af-b441-61eafa12c35a STEP: Updating configmap cm-test-opt-upd-4e6826e2-0c1b-4dd9-b235-4130c1da64e0 STEP: Creating configMap with name cm-test-opt-create-30f88fb0-a843-44a8-b8bc-88de59622002 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:15:05.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9063" for this suite. • [SLOW TEST:80.067 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:15:05.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting 
for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 17:15:06.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 17:15:08.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514906, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514906, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514906, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737514905, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 17:15:11.189: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Oct 5 17:15:11.214: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:15:11.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3455" for this suite.
STEP: Destroying namespace "webhook-3455-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.733 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":91,"skipped":1599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:15:12.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:15:17.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6310" for this suite.
• [SLOW TEST:5.595 seconds]
[sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":92,"skipped":1630,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:15:17.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-2cb374e6-603a-4ece-8561-e7f5ce8dd651
STEP: Creating a pod to test consume secrets
Oct 5 17:15:17.783: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4" in namespace "projected-2538" to be "Succeeded or Failed"
Oct 5 17:15:17.828: INFO: Pod "pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.910659ms
Oct 5 17:15:19.833: INFO: Pod "pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050022152s
Oct 5 17:15:21.838: INFO: Pod "pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4": Phase="Running", Reason="", readiness=true. Elapsed: 4.054635673s
Oct 5 17:15:23.843: INFO: Pod "pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05974618s
STEP: Saw pod success
Oct 5 17:15:23.843: INFO: Pod "pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4" satisfied condition "Succeeded or Failed"
Oct 5 17:15:23.846: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4 container secret-volume-test:
STEP: delete the pod
Oct 5 17:15:24.023: INFO: Waiting for pod pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4 to disappear
Oct 5 17:15:24.739: INFO: Pod pod-projected-secrets-08b65aa9-750c-4854-9621-3a909ba152c4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:15:24.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2538" for this suite.
• [SLOW TEST:7.044 seconds]
[sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1636,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:15:24.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:15:24.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Oct 5 17:15:27.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 create -f -'
Oct 5 17:15:31.211: INFO: stderr: ""
Oct 5 17:15:31.211: INFO: stdout: "e2e-test-crd-publish-openapi-9660-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct 5 17:15:31.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 delete e2e-test-crd-publish-openapi-9660-crds test-foo'
Oct 5 17:15:31.333: INFO: stderr: ""
Oct 5 17:15:31.333: INFO: stdout: "e2e-test-crd-publish-openapi-9660-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Oct 5 17:15:31.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 apply -f -'
Oct 5 17:15:31.611: INFO: stderr: ""
Oct 5 17:15:31.611: INFO: stdout: "e2e-test-crd-publish-openapi-9660-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct 5 17:15:31.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 delete e2e-test-crd-publish-openapi-9660-crds test-foo'
Oct 5 17:15:31.732: INFO: stderr: ""
Oct 5 17:15:31.732: INFO: stdout: "e2e-test-crd-publish-openapi-9660-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Oct 5 17:15:31.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 create -f -'
Oct 5 17:15:32.037: INFO: rc: 1
Oct 5 17:15:32.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 apply -f -'
Oct 5 17:15:32.326: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Oct 5 17:15:32.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 create -f -'
Oct 5 17:15:32.599: INFO: rc: 1
Oct 5 17:15:32.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1461 apply -f -'
Oct 5 17:15:32.893: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Oct 5 17:15:32.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9660-crds'
Oct 5 17:15:33.194: INFO: stderr: ""
Oct 5 17:15:33.194: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9660-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Oct 5 17:15:33.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9660-crds.metadata'
Oct 5 17:15:33.478: INFO: stderr: ""
Oct 5 17:15:33.478: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9660-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Oct 5 17:15:33.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9660-crds.spec'
Oct 5 17:15:33.767: INFO: stderr: ""
Oct 5 17:15:33.767: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9660-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Oct 5 17:15:33.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9660-crds.spec.bars'
Oct 5 17:15:34.046: INFO: stderr: ""
Oct 5 17:15:34.046: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9660-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct 5 17:15:34.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9660-crds.spec.bars2'
Oct 5 17:15:34.334: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:15:38.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1461" for this suite.
• [SLOW TEST:13.590 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":94,"skipped":1642,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:15:38.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Oct 5 17:15:38.452: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 5 17:16:38.475: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Oct 5 17:16:38.513: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 5 17:16:38.557: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:17:14.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3283" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:96.322 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":95,"skipped":1643,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:17:14.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-ad686846-f185-46ad-a555-1748e71c1c17
STEP: Creating a pod to test consume configMaps
Oct 5 17:17:14.759: INFO: Waiting up to 5m0s for pod "pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313" in namespace "configmap-8404" to be "Succeeded or Failed"
Oct 5 17:17:14.799: INFO: Pod "pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313": Phase="Pending", Reason="", readiness=false. Elapsed: 40.056129ms
Oct 5 17:17:16.804: INFO: Pod "pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044888878s
Oct 5 17:17:18.809: INFO: Pod "pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049554285s
STEP: Saw pod success
Oct 5 17:17:18.809: INFO: Pod "pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313" satisfied condition "Succeeded or Failed"
Oct 5 17:17:18.812: INFO: Trying to get logs from node latest-worker pod pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313 container configmap-volume-test:
STEP: delete the pod
Oct 5 17:17:18.839: INFO: Waiting for pod pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313 to disappear
Oct 5 17:17:18.844: INFO: Pod pod-configmaps-59a0c6e7-0f8a-466c-a51d-e2946e6eb313 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:17:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8404" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":96,"skipped":1649,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:17:18.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-13475ee5-825e-4feb-98d3-ec267daa3ec8
STEP: Creating a pod to test consume configMaps
Oct 5 17:17:18.939: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f" in namespace "projected-2706" to be "Succeeded or Failed"
Oct 5 17:17:18.952: INFO: Pod "pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.310305ms
Oct 5 17:17:21.342: INFO: Pod "pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402342893s
Oct 5 17:17:23.398: INFO: Pod "pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.458760237s
Oct 5 17:17:25.402: INFO: Pod "pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.462891863s
STEP: Saw pod success
Oct 5 17:17:25.402: INFO: Pod "pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f" satisfied condition "Succeeded or Failed"
Oct 5 17:17:25.405: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f container projected-configmap-volume-test:
STEP: delete the pod
Oct 5 17:17:25.473: INFO: Waiting for pod pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f to disappear
Oct 5 17:17:25.481: INFO: Pod pod-projected-configmaps-922548d1-0a82-4cc9-b034-c48949ca9b3f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:17:25.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2706" for this suite.
• [SLOW TEST:6.633 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1653,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:17:25.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-415/configmap-test-ed0ebe7d-c5f9-4cba-9fa5-c3726c1a108d STEP: Creating a pod to test consume configMaps Oct 5 17:17:25.622: INFO: Waiting up to 5m0s for pod "pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d" in namespace "configmap-415" to be "Succeeded or Failed" Oct 5 17:17:25.637: INFO: Pod "pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d": 
Phase="Pending", Reason="", readiness=false. Elapsed: 15.706385ms Oct 5 17:17:27.642: INFO: Pod "pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019848592s Oct 5 17:17:29.645: INFO: Pod "pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023790402s STEP: Saw pod success Oct 5 17:17:29.646: INFO: Pod "pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d" satisfied condition "Succeeded or Failed" Oct 5 17:17:29.648: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d container env-test: STEP: delete the pod Oct 5 17:17:29.712: INFO: Waiting for pod pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d to disappear Oct 5 17:17:29.740: INFO: Pod pod-configmaps-00560111-7013-4e7b-9f34-58bc8f799f7d no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:17:29.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-415" for this suite. 
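The environment-consumption variant above injects a ConfigMap key as a container env var rather than a file. A sketch of the env entry such a pod carries, with illustrative ConfigMap name and key:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envFromConfigMap builds one container env entry whose value comes from
// a ConfigMap key, the pattern this env-consumption test exercises.
// The env var name, ConfigMap name, and key are illustrative.
func envFromConfigMap(envName, cmName, key string) map[string]interface{} {
	return map[string]interface{}{
		"name": envName,
		"valueFrom": map[string]interface{}{
			"configMapKeyRef": map[string]interface{}{
				"name": cmName,
				"key":  key,
			},
		},
	}
}

func main() {
	entry := envFromConfigMap("DATA_1", "configmap-test", "data-1")
	out, _ := json.Marshal(entry)
	fmt.Println(string(out))
}
```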
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":98,"skipped":1669,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:17:29.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:17:30.500: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:17:32.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515050, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515050, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515050, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515050, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:17:35.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:17:48.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1502" for this suite. STEP: Destroying namespace "webhook-1502-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.566 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":99,"skipped":1685,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:17:48.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-07fbe95c-18a6-4c82-bb71-79985900556e STEP: Creating a pod to test consume 
configMaps Oct 5 17:17:48.467: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4" in namespace "projected-8321" to be "Succeeded or Failed" Oct 5 17:17:48.471: INFO: Pod "pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.607215ms Oct 5 17:17:50.475: INFO: Pod "pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007681384s Oct 5 17:17:54.073: INFO: Pod "pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.605825638s Oct 5 17:17:56.079: INFO: Pod "pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.612246736s STEP: Saw pod success Oct 5 17:17:56.080: INFO: Pod "pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4" satisfied condition "Succeeded or Failed" Oct 5 17:17:56.083: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4 container projected-configmap-volume-test: STEP: delete the pod Oct 5 17:17:56.189: INFO: Waiting for pod pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4 to disappear Oct 5 17:17:56.199: INFO: Pod pod-projected-configmaps-3f33f3b6-174e-4c1a-9e50-c63ee39d0cf4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:17:56.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8321" for this suite. 
• [SLOW TEST:7.952 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1705,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:17:56.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Oct 5 17:17:57.053: INFO: Pod name wrapped-volume-race-85a733d3-b64a-458d-b84b-d84b946c7414: Found 0 pods out of 5 Oct 5 17:18:02.062: INFO: Pod name wrapped-volume-race-85a733d3-b64a-458d-b84b-d84b946c7414: Found 5 pods out of 5 STEP: Ensuring 
each pod is running STEP: deleting ReplicationController wrapped-volume-race-85a733d3-b64a-458d-b84b-d84b946c7414 in namespace emptydir-wrapper-2762, will wait for the garbage collector to delete the pods Oct 5 17:18:18.156: INFO: Deleting ReplicationController wrapped-volume-race-85a733d3-b64a-458d-b84b-d84b946c7414 took: 8.219247ms Oct 5 17:18:18.756: INFO: Terminating ReplicationController wrapped-volume-race-85a733d3-b64a-458d-b84b-d84b946c7414 pods took: 600.236498ms STEP: Creating RC which spawns configmap-volume pods Oct 5 17:18:30.089: INFO: Pod name wrapped-volume-race-e6b14cc8-b3b5-488e-a848-97575d86ee95: Found 0 pods out of 5 Oct 5 17:18:35.098: INFO: Pod name wrapped-volume-race-e6b14cc8-b3b5-488e-a848-97575d86ee95: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e6b14cc8-b3b5-488e-a848-97575d86ee95 in namespace emptydir-wrapper-2762, will wait for the garbage collector to delete the pods Oct 5 17:18:49.181: INFO: Deleting ReplicationController wrapped-volume-race-e6b14cc8-b3b5-488e-a848-97575d86ee95 took: 10.644799ms Oct 5 17:18:49.582: INFO: Terminating ReplicationController wrapped-volume-race-e6b14cc8-b3b5-488e-a848-97575d86ee95 pods took: 400.131445ms STEP: Creating RC which spawns configmap-volume pods Oct 5 17:18:54.734: INFO: Pod name wrapped-volume-race-9bad834b-f7dc-4608-8617-d1c86d29fd71: Found 0 pods out of 5 Oct 5 17:18:59.746: INFO: Pod name wrapped-volume-race-9bad834b-f7dc-4608-8617-d1c86d29fd71: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9bad834b-f7dc-4608-8617-d1c86d29fd71 in namespace emptydir-wrapper-2762, will wait for the garbage collector to delete the pods Oct 5 17:19:11.856: INFO: Deleting ReplicationController wrapped-volume-race-9bad834b-f7dc-4608-8617-d1c86d29fd71 took: 8.294442ms Oct 5 17:19:12.056: INFO: Terminating ReplicationController 
wrapped-volume-race-9bad834b-f7dc-4608-8617-d1c86d29fd71 pods took: 200.235869ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:19:20.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2762" for this suite. • [SLOW TEST:84.502 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":101,"skipped":1719,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:19:20.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-2180/secret-test-e2f72078-831d-4a15-a373-d1209f431b3c STEP: Creating a pod to test consume secrets Oct 5 17:19:20.993: INFO: Waiting up to 5m0s for pod "pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5" in namespace "secrets-2180" to be "Succeeded or Failed" Oct 5 17:19:21.002: INFO: Pod "pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690587ms Oct 5 17:19:23.007: INFO: Pod "pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013232621s Oct 5 17:19:25.010: INFO: Pod "pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01695646s STEP: Saw pod success Oct 5 17:19:25.011: INFO: Pod "pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5" satisfied condition "Succeeded or Failed" Oct 5 17:19:25.014: INFO: Trying to get logs from node latest-worker pod pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5 container env-test: STEP: delete the pod Oct 5 17:19:25.047: INFO: Waiting for pod pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5 to disappear Oct 5 17:19:25.056: INFO: Pod pod-configmaps-6876ea6a-60b1-4cba-b5d3-bbb37b2410d5 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:19:25.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2180" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":102,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:19:25.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-b7da0d4a-c896-400e-b1a4-bb89755a2bb5 in namespace container-probe-5887 Oct 5 17:19:29.164: INFO: Started pod liveness-b7da0d4a-c896-400e-b1a4-bb89755a2bb5 in namespace container-probe-5887 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 17:19:29.170: INFO: Initial restart count of pod liveness-b7da0d4a-c896-400e-b1a4-bb89755a2bb5 is 0 Oct 5 17:19:53.303: INFO: Restart count of pod container-probe-5887/liveness-b7da0d4a-c896-400e-b1a4-bb89755a2bb5 is now 1 (24.132810093s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:19:53.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5887" for this suite. • [SLOW TEST:28.275 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1774,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:19:53.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 17:19:57.467: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:19:57.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1213" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":104,"skipped":1785,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:19:57.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple 
daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 5 17:19:57.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:19:57.626: INFO: Number of nodes with available pods: 0 Oct 5 17:19:57.627: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:19:58.651: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:19:58.655: INFO: Number of nodes with available pods: 0 Oct 5 17:19:58.655: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:19:59.632: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:19:59.636: INFO: Number of nodes with available pods: 0 Oct 5 17:19:59.636: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:20:00.633: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:00.635: INFO: Number of nodes with available pods: 0 Oct 5 17:20:00.635: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:20:01.664: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:01.668: INFO: Number of nodes with available pods: 1 Oct 5 17:20:01.668: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 
17:20:02.643: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:02.660: INFO: Number of nodes with available pods: 2 Oct 5 17:20:02.660: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Oct 5 17:20:02.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:02.702: INFO: Number of nodes with available pods: 1 Oct 5 17:20:02.702: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:03.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:03.732: INFO: Number of nodes with available pods: 1 Oct 5 17:20:03.732: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:04.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:04.713: INFO: Number of nodes with available pods: 1 Oct 5 17:20:04.713: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:05.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:05.713: INFO: Number of nodes with available pods: 1 Oct 5 17:20:05.713: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:06.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:06.711: INFO: Number of nodes 
with available pods: 1 Oct 5 17:20:06.711: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:07.707: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:07.710: INFO: Number of nodes with available pods: 1 Oct 5 17:20:07.710: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:08.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:08.711: INFO: Number of nodes with available pods: 1 Oct 5 17:20:08.711: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:09.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:09.712: INFO: Number of nodes with available pods: 1 Oct 5 17:20:09.712: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:10.708: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:10.711: INFO: Number of nodes with available pods: 1 Oct 5 17:20:10.711: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:11.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:11.713: INFO: Number of nodes with available pods: 1 Oct 5 17:20:11.713: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:12.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node Oct 5 17:20:12.713: INFO: Number of nodes with available pods: 1 Oct 5 17:20:12.713: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:20:13.709: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:20:13.713: INFO: Number of nodes with available pods: 2 Oct 5 17:20:13.713: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2453, will wait for the garbage collector to delete the pods Oct 5 17:20:13.778: INFO: Deleting DaemonSet.extensions daemon-set took: 8.301964ms Oct 5 17:20:14.278: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.251647ms Oct 5 17:20:19.881: INFO: Number of nodes with available pods: 0 Oct 5 17:20:19.881: INFO: Number of running nodes: 0, number of available pods: 0 Oct 5 17:20:19.887: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2453/daemonsets","resourceVersion":"3400167"},"items":null} Oct 5 17:20:19.890: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2453/pods","resourceVersion":"3400167"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:20:19.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2453" for this suite. 
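The log above repeatedly skips `latest-control-plane` because the DaemonSet's pods carry no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint. A much-simplified sketch of that check, modeling only key/effect matching (the real corev1 rules also handle operators, values, and empty-key wildcards):

```go
package main

import "fmt"

// Taint and Toleration are simplified stand-ins for the corev1 types.
type Taint struct{ Key, Effect string }
type Toleration struct{ Key, Effect string }

// tolerated reports whether every taint on a node is matched by some
// toleration, the check behind "DaemonSet pods can't tolerate node ...
// skip checking this node" in the log above.
func tolerated(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		ok := false
		for _, tol := range tols {
			if tol.Key == t.Key && (tol.Effect == "" || tol.Effect == t.Effect) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	// No tolerations: the DaemonSet controller skips this node.
	fmt.Println(tolerated(master, nil))
	// With a matching toleration the node would be eligible.
	fmt.Println(tolerated(master, []Toleration{{Key: "node-role.kubernetes.io/master"}}))
}
```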
• [SLOW TEST:22.393 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":105,"skipped":1789,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:19.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:20:19.995: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-2db4f70b-7e8a-4a3e-b4de-f55192dcc4f2" in namespace "security-context-test-5613" to be "Succeeded or Failed"
Oct 5 17:20:19.998: INFO: Pod "busybox-readonly-false-2db4f70b-7e8a-4a3e-b4de-f55192dcc4f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468384ms
Oct 5 17:20:22.005: INFO: Pod "busybox-readonly-false-2db4f70b-7e8a-4a3e-b4de-f55192dcc4f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009339243s
Oct 5 17:20:24.010: INFO: Pod "busybox-readonly-false-2db4f70b-7e8a-4a3e-b4de-f55192dcc4f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014491813s
Oct 5 17:20:24.010: INFO: Pod "busybox-readonly-false-2db4f70b-7e8a-4a3e-b4de-f55192dcc4f2" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:24.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5613" for this suite.
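Note: the `readOnlyRootFilesystem=false` behavior verified above is driven by the container's securityContext. A minimal pod of this shape looks roughly like the following (names, image, and command are illustrative, not the test's exact manifest):

```yaml
# Illustrative pod: writable root filesystem, as exercised by the test above.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false   # name is illustrative
spec:
  restartPolicy: Never
  containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "touch /file && echo ok"]
      securityContext:
        readOnlyRootFilesystem: false   # rootfs stays writable
```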
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1803,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:24.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-91577e55-b52d-40f5-be06-ac0c39f9703d
STEP: Creating a pod to test consume secrets
Oct 5 17:20:24.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8" in namespace "projected-5946" to be "Succeeded or Failed"
Oct 5 17:20:24.152: INFO: Pod "pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.180045ms
Oct 5 17:20:26.171: INFO: Pod "pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026005782s
Oct 5 17:20:28.174: INFO: Pod "pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029856303s
STEP: Saw pod success
Oct 5 17:20:28.174: INFO: Pod "pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8" satisfied condition "Succeeded or Failed"
Oct 5 17:20:28.177: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8 container projected-secret-volume-test:
STEP: delete the pod
Oct 5 17:20:28.259: INFO: Waiting for pod pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8 to disappear
Oct 5 17:20:28.284: INFO: Pod pod-projected-secrets-179cc795-6de7-45b9-b077-2c83f9b60cf8 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:28.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5946" for this suite.
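Note: the "defaultMode set" variant above controls the file permission bits of the projected secret files. A projected-secret volume fragment of the kind being tested looks roughly like this (volume and secret names are illustrative):

```yaml
# Illustrative pod-spec fragment: projected secret volume with explicit defaultMode.
volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # octal in YAML; JSON manifests need the decimal value 256
      sources:
        - secret:
            name: projected-secret-test   # name is illustrative
```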
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":107,"skipped":1807,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:28.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-cb0f338f-9160-46c1-a39e-dcf551fed8e3
STEP: Creating a pod to test consume configMaps
Oct 5 17:20:28.389: INFO: Waiting up to 5m0s for pod "pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a" in namespace "configmap-5852" to be "Succeeded or Failed"
Oct 5 17:20:28.392: INFO: Pod "pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.653616ms
Oct 5 17:20:30.395: INFO: Pod "pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006411435s
Oct 5 17:20:32.400: INFO: Pod "pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01069786s
STEP: Saw pod success
Oct 5 17:20:32.400: INFO: Pod "pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a" satisfied condition "Succeeded or Failed"
Oct 5 17:20:32.403: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a container configmap-volume-test:
STEP: delete the pod
Oct 5 17:20:32.441: INFO: Waiting for pod pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a to disappear
Oct 5 17:20:32.453: INFO: Pod pod-configmaps-7930f90c-493b-4a5a-89bd-2feff7a8570a no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:32.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5852" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1828,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:32.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-35c797b1-3e11-4726-bdf9-c7b6e549671f
STEP: Creating a pod to test consume secrets
Oct 5 17:20:32.567: INFO: Waiting up to 5m0s for pod "pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d" in namespace "secrets-6665" to be "Succeeded or Failed"
Oct 5 17:20:32.573: INFO: Pod "pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.945206ms
Oct 5 17:20:34.577: INFO: Pod "pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010004352s
Oct 5 17:20:36.582: INFO: Pod "pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01505757s
STEP: Saw pod success
Oct 5 17:20:36.582: INFO: Pod "pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d" satisfied condition "Succeeded or Failed"
Oct 5 17:20:36.586: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d container secret-volume-test:
STEP: delete the pod
Oct 5 17:20:36.613: INFO: Waiting for pod pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d to disappear
Oct 5 17:20:36.621: INFO: Pod pod-secrets-e4bc520d-34cb-41af-9e5d-e8204c6b031d no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:36.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6665" for this suite.
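Note: the repeated `Waiting up to 5m0s for pod … / Phase="Pending" … Elapsed: …` lines above come from a simple phase-polling loop that the framework runs until the pod reaches a terminal phase. A minimal sketch of that pattern follows; it is not the framework's actual code, and the names are illustrative:

```python
import time

def wait_for_pod_phase(get_phase, terminal=("Succeeded", "Failed"),
                       timeout=300.0, interval=0.01):
    """Poll get_phase() until it returns a terminal phase or timeout expires,
    logging elapsed time on each attempt (mirrors the log lines above)."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in terminal:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated API responses: Pending twice, then Succeeded,
# matching the three wait iterations seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases))
```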
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":1841,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:36.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:36.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8264" for this suite.
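Note: the Endpoint lifecycle steps above (create, list, update, patch, delete-by-collection) operate on a core/v1 Endpoints object. A minimal object of that kind looks roughly like this (name, IP, and port are illustrative, not the test's actual values):

```yaml
# Illustrative Endpoints object of the kind the lifecycle test manipulates.
apiVersion: v1
kind: Endpoints
metadata:
  name: example-endpoint   # name is illustrative
subsets:
  - addresses:
      - ip: 10.0.0.1
    ports:
      - name: http
        port: 80
```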
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":110,"skipped":1852,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:36.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Oct 5 17:20:36.933: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Oct 5 17:20:36.953: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Oct 5 17:20:36.953: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Oct 5 17:20:36.967: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Oct 5 17:20:36.967: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Oct 5 17:20:36.995: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Oct 5 17:20:36.995: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Oct 5 17:20:44.886: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:44.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-7045" for this suite.
• [SLOW TEST:8.225 seconds]
[sig-scheduling] LimitRange
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":111,"skipped":1873,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:45.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:20:45.105: INFO: Creating ReplicaSet my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646
Oct 5 17:20:45.161: INFO: Pod name my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646: Found 0 pods out of 1
Oct 5 17:20:50.164: INFO: Pod name my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646: Found 1 pods out of 1
Oct 5 17:20:50.164: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646" is running
Oct 5 17:20:50.171: INFO: Pod "my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646-22n5k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:20:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:20:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:20:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:20:45 +0000 UTC Reason: Message:}])
Oct 5 17:20:50.172: INFO: Trying to dial the pod
Oct 5 17:20:55.184: INFO: Controller my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646: Got expected result from replica 1 [my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646-22n5k]: "my-hostname-basic-1b4cf378-5e0d-4179-a1a8-0633d5d96646-22n5k", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:55.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4353" for this suite.
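Note: the LimitRange test a few entries above verified that pods created with no (or partial) resource requirements get the namespace defaults merged in. A LimitRange carrying the same defaults seen in that log (cpu 100m/500m, memory 200Mi/500Mi, ephemeral-storage 200Gi/500Gi; 209715200 bytes = 200Mi, 214748364800 bytes = 200Gi) would look roughly like this (object name is illustrative):

```yaml
# Illustrative LimitRange matching the defaults verified in the log above.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults      # name is illustrative
spec:
  limits:
    - type: Container
      default:                  # applied as limits when a pod omits them
        cpu: 500m
        memory: 500Mi
        ephemeral-storage: 500Gi
      defaultRequest:           # applied as requests when a pod omits them
        cpu: 100m
        memory: 200Mi
        ephemeral-storage: 200Gi
```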
• [SLOW TEST:10.175 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":112,"skipped":1883,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:55.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-d77d2b2c-01ce-46ba-a060-d8a3a72f16ae
STEP: Creating a pod to test consume secrets
Oct 5 17:20:55.299: INFO: Waiting up to 5m0s for pod "pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117" in namespace "secrets-9273" to be "Succeeded or Failed"
Oct 5 17:20:55.303: INFO: Pod "pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117": Phase="Pending", Reason="", readiness=false. Elapsed: 3.687303ms
Oct 5 17:20:57.307: INFO: Pod "pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008162102s
Oct 5 17:20:59.312: INFO: Pod "pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012866629s
STEP: Saw pod success
Oct 5 17:20:59.312: INFO: Pod "pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117" satisfied condition "Succeeded or Failed"
Oct 5 17:20:59.320: INFO: Trying to get logs from node latest-worker pod pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117 container secret-volume-test:
STEP: delete the pod
Oct 5 17:20:59.368: INFO: Waiting for pod pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117 to disappear
Oct 5 17:20:59.371: INFO: Pod pod-secrets-2d7d5174-4fc9-4ff5-b523-777b11a5e117 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:20:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9273" for this suite.
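Note: the `defaultMode` values exercised by the secret-volume test above are Unix file-permission bits. YAML manifests accept octal literals (e.g. `0644`), while JSON manifests have no octal syntax and must use the decimal equivalent. A quick conversion check (helper name is illustrative):

```python
def octal_mode_to_decimal(mode: str) -> int:
    """Convert an octal file-mode string (as written in a YAML manifest)
    to the decimal integer required in JSON manifests."""
    return int(mode, 8)

print(octal_mode_to_decimal("0644"))  # owner rw, group/other r -> 420 in decimal
print(octal_mode_to_decimal("0400"))  # owner read-only -> 256 in decimal
```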
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:20:59.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-f7d486dd-9204-4068-99e9-c09d963c895e
STEP: Creating a pod to test consume secrets
Oct 5 17:20:59.463: INFO: Waiting up to 5m0s for pod "pod-secrets-836da1fc-b554-4094-b178-a27133efe580" in namespace "secrets-1358" to be "Succeeded or Failed"
Oct 5 17:20:59.474: INFO: Pod "pod-secrets-836da1fc-b554-4094-b178-a27133efe580": Phase="Pending", Reason="", readiness=false. Elapsed: 11.101429ms
Oct 5 17:21:01.477: INFO: Pod "pod-secrets-836da1fc-b554-4094-b178-a27133efe580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014729712s
Oct 5 17:21:03.481: INFO: Pod "pod-secrets-836da1fc-b554-4094-b178-a27133efe580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018649645s
STEP: Saw pod success
Oct 5 17:21:03.481: INFO: Pod "pod-secrets-836da1fc-b554-4094-b178-a27133efe580" satisfied condition "Succeeded or Failed"
Oct 5 17:21:03.484: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-836da1fc-b554-4094-b178-a27133efe580 container secret-volume-test:
STEP: delete the pod
Oct 5 17:21:03.649: INFO: Waiting for pod pod-secrets-836da1fc-b554-4094-b178-a27133efe580 to disappear
Oct 5 17:21:03.675: INFO: Pod pod-secrets-836da1fc-b554-4094-b178-a27133efe580 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:21:03.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1358" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":1950,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:21:03.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 17:21:03.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9" in namespace "projected-2156" to be "Succeeded or Failed"
Oct 5 17:21:03.914: INFO: Pod "downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.447652ms
Oct 5 17:21:05.925: INFO: Pod "downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025729604s
Oct 5 17:21:08.107: INFO: Pod "downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.207995295s
STEP: Saw pod success
Oct 5 17:21:08.107: INFO: Pod "downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9" satisfied condition "Succeeded or Failed"
Oct 5 17:21:08.110: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9 container client-container:
STEP: delete the pod
Oct 5 17:21:08.300: INFO: Waiting for pod downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9 to disappear
Oct 5 17:21:08.310: INFO: Pod downwardapi-volume-a18e21c4-86eb-4cb8-a73b-97288c61fbd9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:21:08.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2156" for this suite.
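Note: the "provide podname only" test above exposes the pod's own name to its container through a projected downward API volume. The volume fragment for that mechanism looks roughly like this (volume and file names are illustrative):

```yaml
# Illustrative pod-spec fragment: project the pod's name into a file
# via the downward API, as exercised by the test above.
volumes:
  - name: podinfo
    projected:
      sources:
        - downwardAPI:
            items:
              - path: podname           # file name inside the mount
                fieldRef:
                  fieldPath: metadata.name
```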
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1984,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:21:08.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 17:21:08.420: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58" in namespace "projected-247" to be "Succeeded or Failed"
Oct 5 17:21:08.472: INFO: Pod "downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58": Phase="Pending", Reason="", readiness=false. Elapsed: 52.000838ms
Oct 5 17:21:10.476: INFO: Pod "downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056269153s
Oct 5 17:21:12.481: INFO: Pod "downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060825411s
STEP: Saw pod success
Oct 5 17:21:12.481: INFO: Pod "downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58" satisfied condition "Succeeded or Failed"
Oct 5 17:21:12.484: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58 container client-container:
STEP: delete the pod
Oct 5 17:21:12.525: INFO: Waiting for pod downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58 to disappear
Oct 5 17:21:12.539: INFO: Pod downwardapi-volume-46920a14-2fa0-45bf-b109-b1b4a23c1b58 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:21:12.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-247" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":1992,"failed":0}
SSSSSS
------------------------------
[sig-instrumentation] Events API should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:21:12.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Oct 5 17:21:12.668: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:21:12.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5894" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":117,"skipped":1998,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:21:12.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:21:12.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2263
I1005 17:21:12.804243 7
runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2263, replica count: 1
I1005 17:21:13.854591 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1005 17:21:14.854836 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1005 17:21:15.855049 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 5 17:21:16.042: INFO: Created: latency-svc-clbh7
Oct 5 17:21:16.048: INFO: Got endpoints: latency-svc-clbh7 [93.638587ms]
Oct 5 17:21:16.102: INFO: Created: latency-svc-sbfzr
Oct 5 17:21:16.120: INFO: Got endpoints: latency-svc-sbfzr [70.961777ms]
Oct 5 17:21:16.193: INFO: Created: latency-svc-g94b9
Oct 5 17:21:16.198: INFO: Got endpoints: latency-svc-g94b9 [149.003829ms]
Oct 5 17:21:16.248: INFO: Created: latency-svc-g246z
Oct 5 17:21:16.259: INFO: Got endpoints: latency-svc-g246z [210.843829ms]
Oct 5 17:21:16.348: INFO: Created: latency-svc-cd8sp
Oct 5 17:21:16.352: INFO: Got endpoints: latency-svc-cd8sp [303.083991ms]
Oct 5 17:21:16.390: INFO: Created: latency-svc-8wrsx
Oct 5 17:21:16.398: INFO: Got endpoints: latency-svc-8wrsx [349.010449ms]
Oct 5 17:21:16.432: INFO: Created: latency-svc-bnhtb
Oct 5 17:21:16.497: INFO: Got endpoints: latency-svc-bnhtb [447.965821ms]
Oct 5 17:21:16.499: INFO: Created: latency-svc-wxj6j
Oct 5 17:21:16.504: INFO: Got endpoints: latency-svc-wxj6j [455.110493ms]
Oct 5 17:21:16.524: INFO: Created: latency-svc-5sslx
Oct 5 17:21:16.555: INFO: Got endpoints: latency-svc-5sslx [505.570909ms]
Oct 5 17:21:16.582: INFO: Created: latency-svc-z569x
Oct 5 17:21:16.595: INFO: Got endpoints: latency-svc-z569x [546.030857ms]
Oct 5 17:21:16.647: INFO: Created: latency-svc-wkwv7
Oct 5 17:21:16.680: INFO: Got
endpoints: latency-svc-wkwv7 [631.265695ms]
Oct 5 17:21:16.734: INFO: Created: latency-svc-f684k
Oct 5 17:21:16.803: INFO: Got endpoints: latency-svc-f684k [753.780879ms]
Oct 5 17:21:16.815: INFO: Created: latency-svc-nbwbx
Oct 5 17:21:16.836: INFO: Got endpoints: latency-svc-nbwbx [787.033939ms]
Oct 5 17:21:16.858: INFO: Created: latency-svc-cvc5d
Oct 5 17:21:16.885: INFO: Got endpoints: latency-svc-cvc5d [835.472312ms]
Oct 5 17:21:16.953: INFO: Created: latency-svc-b4p44
Oct 5 17:21:16.959: INFO: Got endpoints: latency-svc-b4p44 [910.367507ms]
Oct 5 17:21:16.998: INFO: Created: latency-svc-rj2xd
Oct 5 17:21:17.011: INFO: Got endpoints: latency-svc-rj2xd [961.739508ms]
Oct 5 17:21:17.038: INFO: Created: latency-svc-lmbqn
Oct 5 17:21:17.077: INFO: Got endpoints: latency-svc-lmbqn [957.567145ms]
Oct 5 17:21:17.092: INFO: Created: latency-svc-llkt5
Oct 5 17:21:17.101: INFO: Got endpoints: latency-svc-llkt5 [903.509241ms]
Oct 5 17:21:17.134: INFO: Created: latency-svc-jltgh
Oct 5 17:21:17.149: INFO: Got endpoints: latency-svc-jltgh [889.965324ms]
Oct 5 17:21:17.172: INFO: Created: latency-svc-4nj88
Oct 5 17:21:17.227: INFO: Got endpoints: latency-svc-4nj88 [875.553569ms]
Oct 5 17:21:17.229: INFO: Created: latency-svc-lq4t7
Oct 5 17:21:17.272: INFO: Got endpoints: latency-svc-lq4t7 [874.164774ms]
Oct 5 17:21:17.295: INFO: Created: latency-svc-hlqgx
Oct 5 17:21:17.306: INFO: Got endpoints: latency-svc-hlqgx [809.386692ms]
Oct 5 17:21:17.326: INFO: Created: latency-svc-qk529
Oct 5 17:21:17.395: INFO: Got endpoints: latency-svc-qk529 [890.670497ms]
Oct 5 17:21:17.400: INFO: Created: latency-svc-6mkmg
Oct 5 17:21:17.415: INFO: Got endpoints: latency-svc-6mkmg [860.046979ms]
Oct 5 17:21:17.436: INFO: Created: latency-svc-j74jk
Oct 5 17:21:17.451: INFO: Got endpoints: latency-svc-j74jk [856.083546ms]
Oct 5 17:21:17.474: INFO: Created: latency-svc-qdlxl
Oct 5 17:21:17.481: INFO: Got endpoints: latency-svc-qdlxl [800.748106ms]
Oct 5 17:21:17.533: INFO: Created:
latency-svc-bwr7z
Oct 5 17:21:17.553: INFO: Got endpoints: latency-svc-bwr7z [750.684605ms]
Oct 5 17:21:17.580: INFO: Created: latency-svc-9sd98
Oct 5 17:21:17.596: INFO: Got endpoints: latency-svc-9sd98 [760.192464ms]
Oct 5 17:21:17.617: INFO: Created: latency-svc-qm4cf
Oct 5 17:21:17.694: INFO: Got endpoints: latency-svc-qm4cf [809.641353ms]
Oct 5 17:21:17.710: INFO: Created: latency-svc-mlvjj
Oct 5 17:21:17.722: INFO: Got endpoints: latency-svc-mlvjj [763.216212ms]
Oct 5 17:21:17.752: INFO: Created: latency-svc-68lh9
Oct 5 17:21:17.771: INFO: Got endpoints: latency-svc-68lh9 [760.028227ms]
Oct 5 17:21:17.844: INFO: Created: latency-svc-prkpr
Oct 5 17:21:17.850: INFO: Got endpoints: latency-svc-prkpr [772.206596ms]
Oct 5 17:21:17.896: INFO: Created: latency-svc-k4clq
Oct 5 17:21:17.940: INFO: Got endpoints: latency-svc-k4clq [838.781732ms]
Oct 5 17:21:18.000: INFO: Created: latency-svc-qvx8p
Oct 5 17:21:18.030: INFO: Got endpoints: latency-svc-qvx8p [880.476277ms]
Oct 5 17:21:18.031: INFO: Created: latency-svc-qqlh7
Oct 5 17:21:18.055: INFO: Got endpoints: latency-svc-qqlh7 [827.41026ms]
Oct 5 17:21:18.139: INFO: Created: latency-svc-4cjsn
Oct 5 17:21:18.152: INFO: Got endpoints: latency-svc-4cjsn [879.660669ms]
Oct 5 17:21:18.171: INFO: Created: latency-svc-sz2zv
Oct 5 17:21:18.213: INFO: Got endpoints: latency-svc-sz2zv [907.128446ms]
Oct 5 17:21:18.287: INFO: Created: latency-svc-lp27r
Oct 5 17:21:18.303: INFO: Got endpoints: latency-svc-lp27r [908.260617ms]
Oct 5 17:21:18.349: INFO: Created: latency-svc-zlgcn
Oct 5 17:21:18.366: INFO: Got endpoints: latency-svc-zlgcn [951.121144ms]
Oct 5 17:21:18.431: INFO: Created: latency-svc-tzdw9
Oct 5 17:21:18.435: INFO: Got endpoints: latency-svc-tzdw9 [984.190675ms]
Oct 5 17:21:18.522: INFO: Created: latency-svc-s7hcb
Oct 5 17:21:18.576: INFO: Got endpoints: latency-svc-s7hcb [1.094806301s]
Oct 5 17:21:18.615: INFO: Created: latency-svc-5xfhb
Oct 5 17:21:18.627: INFO: Got endpoints: latency-svc-5xfhb [1.073761653s]
Oct
5 17:21:18.713: INFO: Created: latency-svc-s9dg4
Oct 5 17:21:18.716: INFO: Got endpoints: latency-svc-s9dg4 [1.120164495s]
Oct 5 17:21:18.756: INFO: Created: latency-svc-qwbxh
Oct 5 17:21:18.772: INFO: Got endpoints: latency-svc-qwbxh [1.077683373s]
Oct 5 17:21:18.851: INFO: Created: latency-svc-tjsgl
Oct 5 17:21:18.879: INFO: Got endpoints: latency-svc-tjsgl [1.156591487s]
Oct 5 17:21:18.880: INFO: Created: latency-svc-5bzrm
Oct 5 17:21:18.903: INFO: Got endpoints: latency-svc-5bzrm [1.131973245s]
Oct 5 17:21:19.020: INFO: Created: latency-svc-bgbmz
Oct 5 17:21:19.041: INFO: Got endpoints: latency-svc-bgbmz [1.19136989s]
Oct 5 17:21:19.077: INFO: Created: latency-svc-cktmt
Oct 5 17:21:19.091: INFO: Got endpoints: latency-svc-cktmt [1.150719361s]
Oct 5 17:21:19.161: INFO: Created: latency-svc-7xjjh
Oct 5 17:21:19.181: INFO: Got endpoints: latency-svc-7xjjh [1.151100216s]
Oct 5 17:21:19.212: INFO: Created: latency-svc-7z2xd
Oct 5 17:21:19.223: INFO: Got endpoints: latency-svc-7z2xd [1.168493433s]
Oct 5 17:21:19.305: INFO: Created: latency-svc-vlrlb
Oct 5 17:21:19.321: INFO: Got endpoints: latency-svc-vlrlb [1.169346098s]
Oct 5 17:21:19.347: INFO: Created: latency-svc-djtz4
Oct 5 17:21:19.362: INFO: Got endpoints: latency-svc-djtz4 [1.148533352s]
Oct 5 17:21:19.449: INFO: Created: latency-svc-t7xpv
Oct 5 17:21:19.467: INFO: Got endpoints: latency-svc-t7xpv [1.163599637s]
Oct 5 17:21:19.481: INFO: Created: latency-svc-tfsn9
Oct 5 17:21:19.502: INFO: Got endpoints: latency-svc-tfsn9 [1.136585952s]
Oct 5 17:21:19.539: INFO: Created: latency-svc-lcnh9
Oct 5 17:21:19.581: INFO: Got endpoints: latency-svc-lcnh9 [1.145417254s]
Oct 5 17:21:19.629: INFO: Created: latency-svc-t5vkk
Oct 5 17:21:19.656: INFO: Got endpoints: latency-svc-t5vkk [1.080197958s]
Oct 5 17:21:19.730: INFO: Created: latency-svc-slhrm
Oct 5 17:21:19.734: INFO: Got endpoints: latency-svc-slhrm [1.106876113s]
Oct 5 17:21:19.782: INFO: Created: latency-svc-tf9fw
Oct 5 17:21:19.801: INFO: Got endpoints:
latency-svc-tf9fw [1.084836275s]
Oct 5 17:21:19.874: INFO: Created: latency-svc-s9b76
Oct 5 17:21:19.879: INFO: Got endpoints: latency-svc-s9b76 [1.106605157s]
Oct 5 17:21:19.938: INFO: Created: latency-svc-9cdp2
Oct 5 17:21:19.952: INFO: Got endpoints: latency-svc-9cdp2 [1.072672625s]
Oct 5 17:21:20.034: INFO: Created: latency-svc-lwppc
Oct 5 17:21:20.079: INFO: Got endpoints: latency-svc-lwppc [1.176354506s]
Oct 5 17:21:20.121: INFO: Created: latency-svc-sl2kx
Oct 5 17:21:20.161: INFO: Got endpoints: latency-svc-sl2kx [1.119913998s]
Oct 5 17:21:20.208: INFO: Created: latency-svc-7bs5d
Oct 5 17:21:20.233: INFO: Got endpoints: latency-svc-7bs5d [1.141949536s]
Oct 5 17:21:20.312: INFO: Created: latency-svc-966nf
Oct 5 17:21:20.322: INFO: Got endpoints: latency-svc-966nf [1.140969487s]
Oct 5 17:21:20.376: INFO: Created: latency-svc-5xf96
Oct 5 17:21:20.389: INFO: Got endpoints: latency-svc-5xf96 [1.165221927s]
Oct 5 17:21:20.455: INFO: Created: latency-svc-xchcj
Oct 5 17:21:20.463: INFO: Got endpoints: latency-svc-xchcj [1.141805221s]
Oct 5 17:21:20.532: INFO: Created: latency-svc-7466d
Oct 5 17:21:20.612: INFO: Got endpoints: latency-svc-7466d [1.249446571s]
Oct 5 17:21:20.613: INFO: Created: latency-svc-dt5pj
Oct 5 17:21:20.617: INFO: Got endpoints: latency-svc-dt5pj [1.150510004s]
Oct 5 17:21:20.673: INFO: Created: latency-svc-gvsz4
Oct 5 17:21:20.706: INFO: Got endpoints: latency-svc-gvsz4 [1.203776913s]
Oct 5 17:21:20.762: INFO: Created: latency-svc-skgcr
Oct 5 17:21:20.767: INFO: Got endpoints: latency-svc-skgcr [1.186584842s]
Oct 5 17:21:20.818: INFO: Created: latency-svc-l2sqj
Oct 5 17:21:20.830: INFO: Got endpoints: latency-svc-l2sqj [1.174247017s]
Oct 5 17:21:20.853: INFO: Created: latency-svc-dgrtq
Oct 5 17:21:20.929: INFO: Got endpoints: latency-svc-dgrtq [1.194344395s]
Oct 5 17:21:20.971: INFO: Created: latency-svc-sm9b4
Oct 5 17:21:20.985: INFO: Got endpoints: latency-svc-sm9b4 [1.183321998s]
Oct 5 17:21:21.009: INFO: Created: latency-svc-vpxwr
Oct 5
17:21:21.021: INFO: Got endpoints: latency-svc-vpxwr [1.142102031s]
Oct 5 17:21:21.072: INFO: Created: latency-svc-dv8ql
Oct 5 17:21:21.084: INFO: Got endpoints: latency-svc-dv8ql [1.131898999s]
Oct 5 17:21:21.108: INFO: Created: latency-svc-d264q
Oct 5 17:21:21.144: INFO: Got endpoints: latency-svc-d264q [1.064589463s]
Oct 5 17:21:21.210: INFO: Created: latency-svc-q9sr8
Oct 5 17:21:21.215: INFO: Got endpoints: latency-svc-q9sr8 [1.053497133s]
Oct 5 17:21:21.273: INFO: Created: latency-svc-qdcsd
Oct 5 17:21:21.286: INFO: Got endpoints: latency-svc-qdcsd [1.053101022s]
Oct 5 17:21:21.348: INFO: Created: latency-svc-qsh2w
Oct 5 17:21:21.353: INFO: Got endpoints: latency-svc-qsh2w [1.030769711s]
Oct 5 17:21:21.407: INFO: Created: latency-svc-6m5ph
Oct 5 17:21:21.419: INFO: Got endpoints: latency-svc-6m5ph [1.029927932s]
Oct 5 17:21:21.438: INFO: Created: latency-svc-p6w4m
Oct 5 17:21:21.485: INFO: Got endpoints: latency-svc-p6w4m [1.022211495s]
Oct 5 17:21:21.510: INFO: Created: latency-svc-cw92v
Oct 5 17:21:21.521: INFO: Got endpoints: latency-svc-cw92v [909.546337ms]
Oct 5 17:21:21.540: INFO: Created: latency-svc-58k4g
Oct 5 17:21:21.569: INFO: Got endpoints: latency-svc-58k4g [951.99931ms]
Oct 5 17:21:21.606: INFO: Created: latency-svc-tdnpd
Oct 5 17:21:21.617: INFO: Got endpoints: latency-svc-tdnpd [911.106673ms]
Oct 5 17:21:21.639: INFO: Created: latency-svc-959cv
Oct 5 17:21:21.656: INFO: Got endpoints: latency-svc-959cv [888.771153ms]
Oct 5 17:21:21.675: INFO: Created: latency-svc-b8ct8
Oct 5 17:21:21.690: INFO: Got endpoints: latency-svc-b8ct8 [859.623154ms]
Oct 5 17:21:21.749: INFO: Created: latency-svc-vj5kz
Oct 5 17:21:21.759: INFO: Got endpoints: latency-svc-vj5kz [830.040207ms]
Oct 5 17:21:21.804: INFO: Created: latency-svc-j2864
Oct 5 17:21:21.833: INFO: Got endpoints: latency-svc-j2864 [847.861662ms]
Oct 5 17:21:21.899: INFO: Created: latency-svc-wqcrj
Oct 5 17:21:21.910: INFO: Got endpoints: latency-svc-wqcrj [888.811333ms]
Oct 5 17:21:21.939: INFO:
Created: latency-svc-j67dm
Oct 5 17:21:21.978: INFO: Got endpoints: latency-svc-j67dm [893.689072ms]
Oct 5 17:21:21.996: INFO: Created: latency-svc-qshmt
Oct 5 17:21:22.030: INFO: Got endpoints: latency-svc-qshmt [885.953308ms]
Oct 5 17:21:22.047: INFO: Created: latency-svc-mnv29
Oct 5 17:21:22.073: INFO: Got endpoints: latency-svc-mnv29 [857.848918ms]
Oct 5 17:21:22.089: INFO: Created: latency-svc-zccdq
Oct 5 17:21:22.103: INFO: Got endpoints: latency-svc-zccdq [816.390777ms]
Oct 5 17:21:22.119: INFO: Created: latency-svc-jn8h2
Oct 5 17:21:22.162: INFO: Got endpoints: latency-svc-jn8h2 [808.393789ms]
Oct 5 17:21:22.175: INFO: Created: latency-svc-xwxm2
Oct 5 17:21:22.187: INFO: Got endpoints: latency-svc-xwxm2 [768.761042ms]
Oct 5 17:21:22.205: INFO: Created: latency-svc-kjkrn
Oct 5 17:21:22.218: INFO: Got endpoints: latency-svc-kjkrn [732.419973ms]
Oct 5 17:21:22.236: INFO: Created: latency-svc-q6pt9
Oct 5 17:21:22.248: INFO: Got endpoints: latency-svc-q6pt9 [726.966792ms]
Oct 5 17:21:22.305: INFO: Created: latency-svc-v6pbf
Oct 5 17:21:22.309: INFO: Got endpoints: latency-svc-v6pbf [739.22945ms]
Oct 5 17:21:22.378: INFO: Created: latency-svc-sdlz7
Oct 5 17:21:22.404: INFO: Got endpoints: latency-svc-sdlz7 [786.350773ms]
Oct 5 17:21:22.461: INFO: Created: latency-svc-m4d5j
Oct 5 17:21:22.483: INFO: Got endpoints: latency-svc-m4d5j [826.503657ms]
Oct 5 17:21:22.540: INFO: Created: latency-svc-9xwlr
Oct 5 17:21:22.555: INFO: Got endpoints: latency-svc-9xwlr [864.943347ms]
Oct 5 17:21:22.605: INFO: Created: latency-svc-mzrv6
Oct 5 17:21:22.618: INFO: Got endpoints: latency-svc-mzrv6 [858.842333ms]
Oct 5 17:21:22.650: INFO: Created: latency-svc-t4tqj
Oct 5 17:21:22.663: INFO: Got endpoints: latency-svc-t4tqj [830.625104ms]
Oct 5 17:21:22.691: INFO: Created: latency-svc-5tfd7
Oct 5 17:21:22.700: INFO: Got endpoints: latency-svc-5tfd7 [790.107662ms]
Oct 5 17:21:22.743: INFO: Created: latency-svc-g5lj7
Oct 5 17:21:22.748: INFO: Got endpoints: latency-svc-g5lj7
[770.551684ms]
Oct 5 17:21:22.766: INFO: Created: latency-svc-9g4dj
Oct 5 17:21:22.778: INFO: Got endpoints: latency-svc-9g4dj [748.417686ms]
Oct 5 17:21:22.809: INFO: Created: latency-svc-v88fq
Oct 5 17:21:22.833: INFO: Got endpoints: latency-svc-v88fq [760.227374ms]
Oct 5 17:21:22.918: INFO: Created: latency-svc-89nzv
Oct 5 17:21:22.955: INFO: Got endpoints: latency-svc-89nzv [851.996617ms]
Oct 5 17:21:23.001: INFO: Created: latency-svc-dpnq5
Oct 5 17:21:23.084: INFO: Got endpoints: latency-svc-dpnq5 [921.906318ms]
Oct 5 17:21:23.094: INFO: Created: latency-svc-9dxx7
Oct 5 17:21:23.110: INFO: Got endpoints: latency-svc-9dxx7 [922.304577ms]
Oct 5 17:21:23.129: INFO: Created: latency-svc-5cmbg
Oct 5 17:21:23.140: INFO: Got endpoints: latency-svc-5cmbg [922.003781ms]
Oct 5 17:21:23.159: INFO: Created: latency-svc-7lx6g
Oct 5 17:21:23.170: INFO: Got endpoints: latency-svc-7lx6g [921.962676ms]
Oct 5 17:21:23.228: INFO: Created: latency-svc-cmt6k
Oct 5 17:21:23.236: INFO: Got endpoints: latency-svc-cmt6k [927.885704ms]
Oct 5 17:21:23.270: INFO: Created: latency-svc-4rm8h
Oct 5 17:21:23.285: INFO: Got endpoints: latency-svc-4rm8h [881.60472ms]
Oct 5 17:21:23.309: INFO: Created: latency-svc-fs975
Oct 5 17:21:23.324: INFO: Got endpoints: latency-svc-fs975 [841.039225ms]
Oct 5 17:21:23.372: INFO: Created: latency-svc-jfr5z
Oct 5 17:21:23.376: INFO: Got endpoints: latency-svc-jfr5z [820.658244ms]
Oct 5 17:21:23.403: INFO: Created: latency-svc-x4h22
Oct 5 17:21:23.414: INFO: Got endpoints: latency-svc-x4h22 [796.48593ms]
Oct 5 17:21:23.439: INFO: Created: latency-svc-pr9hc
Oct 5 17:21:23.452: INFO: Got endpoints: latency-svc-pr9hc [788.203142ms]
Oct 5 17:21:23.545: INFO: Created: latency-svc-p4vw9
Oct 5 17:21:23.550: INFO: Got endpoints: latency-svc-p4vw9 [849.794929ms]
Oct 5 17:21:23.573: INFO: Created: latency-svc-tmh7c
Oct 5 17:21:23.589: INFO: Got endpoints: latency-svc-tmh7c [840.729563ms]
Oct 5 17:21:23.609: INFO: Created: latency-svc-lfs6l
Oct 5 17:21:23.626: INFO:
Got endpoints: latency-svc-lfs6l [847.625871ms]
Oct 5 17:21:23.643: INFO: Created: latency-svc-kv4v2
Oct 5 17:21:23.695: INFO: Got endpoints: latency-svc-kv4v2 [861.604999ms]
Oct 5 17:21:23.699: INFO: Created: latency-svc-2sgqx
Oct 5 17:21:23.705: INFO: Got endpoints: latency-svc-2sgqx [749.993318ms]
Oct 5 17:21:23.723: INFO: Created: latency-svc-gcwbv
Oct 5 17:21:23.734: INFO: Got endpoints: latency-svc-gcwbv [650.881245ms]
Oct 5 17:21:23.753: INFO: Created: latency-svc-qqmmd
Oct 5 17:21:23.765: INFO: Got endpoints: latency-svc-qqmmd [655.282261ms]
Oct 5 17:21:23.875: INFO: Created: latency-svc-f2zr4
Oct 5 17:21:23.901: INFO: Got endpoints: latency-svc-f2zr4 [761.37393ms]
Oct 5 17:21:23.925: INFO: Created: latency-svc-ffwqp
Oct 5 17:21:23.940: INFO: Got endpoints: latency-svc-ffwqp [769.314049ms]
Oct 5 17:21:23.964: INFO: Created: latency-svc-t7xbl
Oct 5 17:21:24.018: INFO: Got endpoints: latency-svc-t7xbl [781.234473ms]
Oct 5 17:21:24.042: INFO: Created: latency-svc-4wf2v
Oct 5 17:21:24.054: INFO: Got endpoints: latency-svc-4wf2v [768.499204ms]
Oct 5 17:21:24.077: INFO: Created: latency-svc-9629h
Oct 5 17:21:24.091: INFO: Got endpoints: latency-svc-9629h [766.645189ms]
Oct 5 17:21:24.111: INFO: Created: latency-svc-tm7dq
Oct 5 17:21:24.144: INFO: Got endpoints: latency-svc-tm7dq [767.878977ms]
Oct 5 17:21:24.161: INFO: Created: latency-svc-wrn7g
Oct 5 17:21:24.176: INFO: Got endpoints: latency-svc-wrn7g [762.166792ms]
Oct 5 17:21:24.203: INFO: Created: latency-svc-r4695
Oct 5 17:21:24.217: INFO: Got endpoints: latency-svc-r4695 [765.664027ms]
Oct 5 17:21:24.239: INFO: Created: latency-svc-9ggww
Oct 5 17:21:24.305: INFO: Got endpoints: latency-svc-9ggww [755.395344ms]
Oct 5 17:21:24.308: INFO: Created: latency-svc-kd97b
Oct 5 17:21:24.314: INFO: Got endpoints: latency-svc-kd97b [724.507526ms]
Oct 5 17:21:24.333: INFO: Created: latency-svc-vvhtt
Oct 5 17:21:24.344: INFO: Got endpoints: latency-svc-vvhtt [717.778133ms]
Oct 5 17:21:24.362: INFO: Created:
latency-svc-hdshw
Oct 5 17:21:24.395: INFO: Got endpoints: latency-svc-hdshw [700.111704ms]
Oct 5 17:21:24.461: INFO: Created: latency-svc-2lc6w
Oct 5 17:21:24.465: INFO: Got endpoints: latency-svc-2lc6w [760.543144ms]
Oct 5 17:21:24.500: INFO: Created: latency-svc-dcjhf
Oct 5 17:21:24.525: INFO: Got endpoints: latency-svc-dcjhf [790.352724ms]
Oct 5 17:21:24.554: INFO: Created: latency-svc-g7v6w
Oct 5 17:21:24.617: INFO: Got endpoints: latency-svc-g7v6w [851.930462ms]
Oct 5 17:21:24.622: INFO: Created: latency-svc-2dcnw
Oct 5 17:21:24.627: INFO: Got endpoints: latency-svc-2dcnw [726.294787ms]
Oct 5 17:21:24.647: INFO: Created: latency-svc-9flcc
Oct 5 17:21:24.658: INFO: Got endpoints: latency-svc-9flcc [718.354606ms]
Oct 5 17:21:24.678: INFO: Created: latency-svc-kzndc
Oct 5 17:21:24.691: INFO: Got endpoints: latency-svc-kzndc [673.55901ms]
Oct 5 17:21:24.710: INFO: Created: latency-svc-rwhd9
Oct 5 17:21:24.761: INFO: Got endpoints: latency-svc-rwhd9 [706.789174ms]
Oct 5 17:21:24.777: INFO: Created: latency-svc-dp9g4
Oct 5 17:21:24.795: INFO: Got endpoints: latency-svc-dp9g4 [703.856356ms]
Oct 5 17:21:24.815: INFO: Created: latency-svc-thfnb
Oct 5 17:21:24.839: INFO: Got endpoints: latency-svc-thfnb [695.08549ms]
Oct 5 17:21:24.923: INFO: Created: latency-svc-mnj2n
Oct 5 17:21:24.969: INFO: Created: latency-svc-8x82z
Oct 5 17:21:24.969: INFO: Got endpoints: latency-svc-mnj2n [792.459401ms]
Oct 5 17:21:25.004: INFO: Got endpoints: latency-svc-8x82z [786.937721ms]
Oct 5 17:21:25.085: INFO: Created: latency-svc-9rv9x
Oct 5 17:21:25.097: INFO: Got endpoints: latency-svc-9rv9x [792.057245ms]
Oct 5 17:21:25.115: INFO: Created: latency-svc-wldcz
Oct 5 17:21:25.125: INFO: Got endpoints: latency-svc-wldcz [811.21746ms]
Oct 5 17:21:25.172: INFO: Created: latency-svc-v8qxk
Oct 5 17:21:25.245: INFO: Got endpoints: latency-svc-v8qxk [901.368792ms]
Oct 5 17:21:25.274: INFO: Created: latency-svc-mxxzj
Oct 5 17:21:25.288: INFO: Got endpoints: latency-svc-mxxzj [893.350908ms]
Oct 5
17:21:25.325: INFO: Created: latency-svc-ss5m9
Oct 5 17:21:25.342: INFO: Got endpoints: latency-svc-ss5m9 [876.850427ms]
Oct 5 17:21:25.401: INFO: Created: latency-svc-mz7qb
Oct 5 17:21:25.430: INFO: Got endpoints: latency-svc-mz7qb [905.274291ms]
Oct 5 17:21:25.467: INFO: Created: latency-svc-prfzg
Oct 5 17:21:25.480: INFO: Got endpoints: latency-svc-prfzg [863.009254ms]
Oct 5 17:21:25.497: INFO: Created: latency-svc-mfk4h
Oct 5 17:21:25.550: INFO: Got endpoints: latency-svc-mfk4h [922.807849ms]
Oct 5 17:21:25.583: INFO: Created: latency-svc-h4sww
Oct 5 17:21:25.604: INFO: Got endpoints: latency-svc-h4sww [946.180586ms]
Oct 5 17:21:25.641: INFO: Created: latency-svc-zfb88
Oct 5 17:21:25.683: INFO: Got endpoints: latency-svc-zfb88 [991.100329ms]
Oct 5 17:21:25.694: INFO: Created: latency-svc-7pw76
Oct 5 17:21:25.710: INFO: Got endpoints: latency-svc-7pw76 [948.533783ms]
Oct 5 17:21:25.771: INFO: Created: latency-svc-dms57
Oct 5 17:21:25.782: INFO: Got endpoints: latency-svc-dms57 [987.462444ms]
Oct 5 17:21:25.821: INFO: Created: latency-svc-z7pdv
Oct 5 17:21:25.831: INFO: Got endpoints: latency-svc-z7pdv [992.308593ms]
Oct 5 17:21:25.856: INFO: Created: latency-svc-4bq7n
Oct 5 17:21:25.879: INFO: Got endpoints: latency-svc-4bq7n [910.041229ms]
Oct 5 17:21:25.898: INFO: Created: latency-svc-8ql69
Oct 5 17:21:25.909: INFO: Got endpoints: latency-svc-8ql69 [904.11837ms]
Oct 5 17:21:25.952: INFO: Created: latency-svc-cgwfh
Oct 5 17:21:25.991: INFO: Got endpoints: latency-svc-cgwfh [893.294946ms]
Oct 5 17:21:25.991: INFO: Created: latency-svc-4mn96
Oct 5 17:21:26.015: INFO: Got endpoints: latency-svc-4mn96 [889.485247ms]
Oct 5 17:21:26.045: INFO: Created: latency-svc-ssz65
Oct 5 17:21:26.096: INFO: Got endpoints: latency-svc-ssz65 [850.120161ms]
Oct 5 17:21:26.120: INFO: Created: latency-svc-jt96v
Oct 5 17:21:26.138: INFO: Got endpoints: latency-svc-jt96v [850.103678ms]
Oct 5 17:21:26.156: INFO: Created: latency-svc-9fcf9
Oct 5 17:21:26.182: INFO: Got endpoints:
latency-svc-9fcf9 [840.109208ms]
Oct 5 17:21:26.257: INFO: Created: latency-svc-26p2d
Oct 5 17:21:26.271: INFO: Got endpoints: latency-svc-26p2d [840.452898ms]
Oct 5 17:21:26.291: INFO: Created: latency-svc-8l28c
Oct 5 17:21:26.304: INFO: Got endpoints: latency-svc-8l28c [823.465464ms]
Oct 5 17:21:26.324: INFO: Created: latency-svc-s2ws8
Oct 5 17:21:26.346: INFO: Got endpoints: latency-svc-s2ws8 [795.209345ms]
Oct 5 17:21:26.482: INFO: Created: latency-svc-t7rrc
Oct 5 17:21:26.490: INFO: Got endpoints: latency-svc-t7rrc [885.398111ms]
Oct 5 17:21:26.522: INFO: Created: latency-svc-j2mhl
Oct 5 17:21:26.571: INFO: Got endpoints: latency-svc-j2mhl [888.351716ms]
Oct 5 17:21:26.630: INFO: Created: latency-svc-jgxgk
Oct 5 17:21:26.657: INFO: Got endpoints: latency-svc-jgxgk [947.804287ms]
Oct 5 17:21:26.658: INFO: Created: latency-svc-wx2fz
Oct 5 17:21:26.693: INFO: Got endpoints: latency-svc-wx2fz [910.871211ms]
Oct 5 17:21:26.728: INFO: Created: latency-svc-rdx9f
Oct 5 17:21:26.772: INFO: Got endpoints: latency-svc-rdx9f [940.88069ms]
Oct 5 17:21:26.786: INFO: Created: latency-svc-k5v8n
Oct 5 17:21:26.803: INFO: Got endpoints: latency-svc-k5v8n [924.276344ms]
Oct 5 17:21:26.823: INFO: Created: latency-svc-t2k8b
Oct 5 17:21:26.833: INFO: Got endpoints: latency-svc-t2k8b [924.552807ms]
Oct 5 17:21:26.861: INFO: Created: latency-svc-rkjcz
Oct 5 17:21:26.911: INFO: Got endpoints: latency-svc-rkjcz [919.917046ms]
Oct 5 17:21:26.915: INFO: Created: latency-svc-mrlk8
Oct 5 17:21:26.930: INFO: Got endpoints: latency-svc-mrlk8 [914.96634ms]
Oct 5 17:21:26.955: INFO: Created: latency-svc-v2qp5
Oct 5 17:21:26.966: INFO: Got endpoints: latency-svc-v2qp5 [870.513286ms]
Oct 5 17:21:26.990: INFO: Created: latency-svc-tlrxf
Oct 5 17:21:27.090: INFO: Got endpoints: latency-svc-tlrxf [951.3326ms]
Oct 5 17:21:27.096: INFO: Created: latency-svc-6xrs8
Oct 5 17:21:27.098: INFO: Got endpoints: latency-svc-6xrs8 [916.106559ms]
Oct 5 17:21:27.146: INFO: Created: latency-svc-n7kxm
Oct 5
17:21:27.159: INFO: Got endpoints: latency-svc-n7kxm [888.383031ms]
Oct 5 17:21:27.176: INFO: Created: latency-svc-7vg8k
Oct 5 17:21:27.228: INFO: Got endpoints: latency-svc-7vg8k [924.27876ms]
Oct 5 17:21:27.242: INFO: Created: latency-svc-rz9dp
Oct 5 17:21:27.256: INFO: Got endpoints: latency-svc-rz9dp [910.0715ms]
Oct 5 17:21:27.281: INFO: Created: latency-svc-zx7hj
Oct 5 17:21:27.311: INFO: Got endpoints: latency-svc-zx7hj [821.581176ms]
Oct 5 17:21:27.372: INFO: Created: latency-svc-pd8k6
Oct 5 17:21:27.388: INFO: Got endpoints: latency-svc-pd8k6 [817.174737ms]
Oct 5 17:21:27.417: INFO: Created: latency-svc-vc2dt
Oct 5 17:21:27.440: INFO: Got endpoints: latency-svc-vc2dt [782.622807ms]
Oct 5 17:21:27.491: INFO: Created: latency-svc-tnfbb
Oct 5 17:21:27.497: INFO: Got endpoints: latency-svc-tnfbb [803.447474ms]
Oct 5 17:21:27.527: INFO: Created: latency-svc-bxjpw
Oct 5 17:21:27.539: INFO: Got endpoints: latency-svc-bxjpw [766.443546ms]
Oct 5 17:21:27.556: INFO: Created: latency-svc-8wwzc
Oct 5 17:21:27.584: INFO: Got endpoints: latency-svc-8wwzc [780.613438ms]
Oct 5 17:21:27.633: INFO: Created: latency-svc-fbg69
Oct 5 17:21:27.649: INFO: Got endpoints: latency-svc-fbg69 [816.12208ms]
Oct 5 17:21:27.675: INFO: Created: latency-svc-frwh7
Oct 5 17:21:27.696: INFO: Got endpoints: latency-svc-frwh7 [785.207721ms]
Oct 5 17:21:27.718: INFO: Created: latency-svc-jf2h9
Oct 5 17:21:27.785: INFO: Got endpoints: latency-svc-jf2h9 [854.765746ms]
Oct 5 17:21:27.812: INFO: Created: latency-svc-q79sx
Oct 5 17:21:27.841: INFO: Got endpoints: latency-svc-q79sx [874.877374ms]
Oct 5 17:21:27.929: INFO: Created: latency-svc-6bbhg
Oct 5 17:21:27.941: INFO: Got endpoints: latency-svc-6bbhg [850.869719ms]
Oct 5 17:21:27.958: INFO: Created: latency-svc-mzz82
Oct 5 17:21:27.982: INFO: Got endpoints: latency-svc-mzz82 [883.831448ms]
Oct 5 17:21:28.007: INFO: Created: latency-svc-q6xll
Oct 5 17:21:28.025: INFO: Got endpoints: latency-svc-q6xll [865.417043ms]
Oct 5 17:21:28.084: INFO:
Created: latency-svc-57wqx
Oct 5 17:21:28.101: INFO: Got endpoints: latency-svc-57wqx [872.389352ms]
Oct 5 17:21:28.130: INFO: Created: latency-svc-9k9bk
Oct 5 17:21:28.151: INFO: Got endpoints: latency-svc-9k9bk [895.711183ms]
Oct 5 17:21:28.152: INFO: Latencies: [70.961777ms 149.003829ms 210.843829ms 303.083991ms 349.010449ms 447.965821ms 455.110493ms 505.570909ms 546.030857ms 631.265695ms 650.881245ms 655.282261ms 673.55901ms 695.08549ms 700.111704ms 703.856356ms 706.789174ms 717.778133ms 718.354606ms 724.507526ms 726.294787ms 726.966792ms 732.419973ms 739.22945ms 748.417686ms 749.993318ms 750.684605ms 753.780879ms 755.395344ms 760.028227ms 760.192464ms 760.227374ms 760.543144ms 761.37393ms 762.166792ms 763.216212ms 765.664027ms 766.443546ms 766.645189ms 767.878977ms 768.499204ms 768.761042ms 769.314049ms 770.551684ms 772.206596ms 780.613438ms 781.234473ms 782.622807ms 785.207721ms 786.350773ms 786.937721ms 787.033939ms 788.203142ms 790.107662ms 790.352724ms 792.057245ms 792.459401ms 795.209345ms 796.48593ms 800.748106ms 803.447474ms 808.393789ms 809.386692ms 809.641353ms 811.21746ms 816.12208ms 816.390777ms 817.174737ms 820.658244ms 821.581176ms 823.465464ms 826.503657ms 827.41026ms 830.040207ms 830.625104ms 835.472312ms 838.781732ms 840.109208ms 840.452898ms 840.729563ms 841.039225ms 847.625871ms 847.861662ms 849.794929ms 850.103678ms 850.120161ms 850.869719ms 851.930462ms 851.996617ms 854.765746ms 856.083546ms 857.848918ms 858.842333ms 859.623154ms 860.046979ms 861.604999ms 863.009254ms 864.943347ms 865.417043ms 870.513286ms 872.389352ms 874.164774ms 874.877374ms 875.553569ms 876.850427ms 879.660669ms 880.476277ms 881.60472ms 883.831448ms 885.398111ms 885.953308ms 888.351716ms 888.383031ms 888.771153ms 888.811333ms 889.485247ms 889.965324ms 890.670497ms 893.294946ms 893.350908ms 893.689072ms 895.711183ms 901.368792ms 903.509241ms 904.11837ms 905.274291ms 907.128446ms 908.260617ms 909.546337ms 910.041229ms 910.0715ms 910.367507ms 910.871211ms 911.106673ms 914.96634ms 916.106559ms 919.917046ms 921.906318ms 921.962676ms 922.003781ms 922.304577ms 922.807849ms 924.276344ms 924.27876ms 924.552807ms 927.885704ms 940.88069ms 946.180586ms 947.804287ms 948.533783ms 951.121144ms 951.3326ms 951.99931ms 957.567145ms 961.739508ms 984.190675ms 987.462444ms 991.100329ms 992.308593ms 1.022211495s 1.029927932s 1.030769711s 1.053101022s 1.053497133s 1.064589463s 1.072672625s 1.073761653s 1.077683373s 1.080197958s 1.084836275s 1.094806301s 1.106605157s 1.106876113s 1.119913998s 1.120164495s 1.131898999s 1.131973245s 1.136585952s 1.140969487s 1.141805221s 1.141949536s 1.142102031s 1.145417254s 1.148533352s 1.150510004s 1.150719361s 1.151100216s 1.156591487s 1.163599637s 1.165221927s 1.168493433s 1.169346098s 1.174247017s 1.176354506s 1.183321998s 1.186584842s 1.19136989s 1.194344395s 1.203776913s 1.249446571s]
Oct 5 17:21:28.152: INFO: 50 %ile: 872.389352ms
Oct 5 17:21:28.152: INFO: 90 %ile: 1.141949536s
Oct 5 17:21:28.152: INFO: 99 %ile: 1.203776913s
Oct 5 17:21:28.152: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:21:28.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2263" for this suite.
• [SLOW TEST:15.532 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":118,"skipped":2016,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:21:28.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-856
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 5 17:21:28.279: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 5 17:21:28.360: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 17:21:30.551: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 17:21:32.368: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 17:21:34.529: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:36.376: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:38.368: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:40.377: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:42.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:44.364: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:46.417: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:48.363: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:50.384: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 17:21:52.370: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 5 17:21:52.376: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 5 17:21:56.449: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.228:8080/dial?request=hostname&protocol=udp&host=10.244.1.227&port=8081&tries=1'] Namespace:pod-network-test-856 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 17:21:56.449: INFO: >>> kubeConfig: /root/.kube/config
I1005 17:21:56.517727 7 log.go:181] (0xc0029d0000) (0xc00236ea00) Create stream
I1005 17:21:56.517759 7 log.go:181] (0xc0029d0000) (0xc00236ea00) Stream added, broadcasting: 1
I1005 17:21:56.522409 7 log.go:181] (0xc0029d0000) Reply frame received for 1
I1005 17:21:56.522456 7 log.go:181] (0xc0029d0000) (0xc003e22000) Create stream
I1005 17:21:56.522474 7 log.go:181] (0xc0029d0000) (0xc003e22000) Stream added, broadcasting: 3
I1005 17:21:56.524978 7 log.go:181] (0xc0029d0000) Reply frame received for 3
I1005 17:21:56.525095 7 log.go:181] (0xc0029d0000) (0xc00236eaa0) Create stream
I1005 17:21:56.525107 7 log.go:181] (0xc0029d0000) (0xc00236eaa0) Stream added, broadcasting: 5
I1005 17:21:56.526049 7 log.go:181] (0xc0029d0000) Reply frame received for 5
I1005 17:21:56.601711 7 log.go:181] (0xc0029d0000) Data frame received for 3
I1005 17:21:56.601741 7 log.go:181] (0xc003e22000) (3) Data frame handling
I1005 17:21:56.601755 7 log.go:181] (0xc003e22000) (3) Data frame sent
I1005 17:21:56.601862 7 log.go:181] (0xc0029d0000) Data frame received for 3
I1005 17:21:56.601883 7 log.go:181] (0xc003e22000) (3) Data frame handling
I1005 17:21:56.602058 7 log.go:181] (0xc0029d0000) Data frame received for 5
I1005 17:21:56.602069 7 log.go:181] (0xc00236eaa0) (5) Data frame handling
I1005 17:21:56.604176 7 log.go:181] (0xc0029d0000) Data frame received for 1
I1005 17:21:56.604192 7 log.go:181] (0xc00236ea00) (1) Data frame handling
I1005 17:21:56.604202 7 log.go:181] (0xc00236ea00) (1) Data frame sent
I1005 17:21:56.604222 7 log.go:181] (0xc0029d0000) (0xc00236ea00) Stream removed, broadcasting: 1
I1005 17:21:56.604238 7 log.go:181] (0xc0029d0000) Go away received
I1005 17:21:56.604351 7 log.go:181] (0xc0029d0000) (0xc00236ea00) Stream removed, broadcasting: 1
I1005 17:21:56.604384 7 log.go:181] (0xc0029d0000) (0xc003e22000) Stream removed, broadcasting: 3
I1005 17:21:56.604396 7 log.go:181] (0xc0029d0000) (0xc00236eaa0) Stream removed, broadcasting: 5
Oct 5 17:21:56.604: INFO: Waiting for responses: map[]
Oct 5 17:21:56.616: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.228:8080/dial?request=hostname&protocol=udp&host=10.244.2.213&port=8081&tries=1'] Namespace:pod-network-test-856 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 17:21:56.616: INFO: >>> kubeConfig: /root/.kube/config
I1005 17:21:56.650551 7 log.go:181] (0xc0029d08f0) (0xc00236f180) Create stream
I1005
17:21:56.650586 7 log.go:181] (0xc0029d08f0) (0xc00236f180) Stream added, broadcasting: 1 I1005 17:21:56.653031 7 log.go:181] (0xc0029d08f0) Reply frame received for 1 I1005 17:21:56.653074 7 log.go:181] (0xc0029d08f0) (0xc007450140) Create stream I1005 17:21:56.653088 7 log.go:181] (0xc0029d08f0) (0xc007450140) Stream added, broadcasting: 3 I1005 17:21:56.653873 7 log.go:181] (0xc0029d08f0) Reply frame received for 3 I1005 17:21:56.653929 7 log.go:181] (0xc0029d08f0) (0xc0013f4820) Create stream I1005 17:21:56.653957 7 log.go:181] (0xc0029d08f0) (0xc0013f4820) Stream added, broadcasting: 5 I1005 17:21:56.655026 7 log.go:181] (0xc0029d08f0) Reply frame received for 5 I1005 17:21:56.729220 7 log.go:181] (0xc0029d08f0) Data frame received for 3 I1005 17:21:56.729253 7 log.go:181] (0xc007450140) (3) Data frame handling I1005 17:21:56.729272 7 log.go:181] (0xc007450140) (3) Data frame sent I1005 17:21:56.729593 7 log.go:181] (0xc0029d08f0) Data frame received for 5 I1005 17:21:56.729611 7 log.go:181] (0xc0013f4820) (5) Data frame handling I1005 17:21:56.729907 7 log.go:181] (0xc0029d08f0) Data frame received for 3 I1005 17:21:56.729923 7 log.go:181] (0xc007450140) (3) Data frame handling I1005 17:21:56.731840 7 log.go:181] (0xc0029d08f0) Data frame received for 1 I1005 17:21:56.731862 7 log.go:181] (0xc00236f180) (1) Data frame handling I1005 17:21:56.731876 7 log.go:181] (0xc00236f180) (1) Data frame sent I1005 17:21:56.731892 7 log.go:181] (0xc0029d08f0) (0xc00236f180) Stream removed, broadcasting: 1 I1005 17:21:56.731910 7 log.go:181] (0xc0029d08f0) Go away received I1005 17:21:56.732140 7 log.go:181] (0xc0029d08f0) (0xc00236f180) Stream removed, broadcasting: 1 I1005 17:21:56.732206 7 log.go:181] (0xc0029d08f0) (0xc007450140) Stream removed, broadcasting: 3 I1005 17:21:56.732230 7 log.go:181] (0xc0029d08f0) (0xc0013f4820) Stream removed, broadcasting: 5 Oct 5 17:21:56.732: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:21:56.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-856" for this suite. • [SLOW TEST:28.505 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":2020,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:21:56.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 5 17:21:57.523: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 5 17:21:59.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:22:01.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, 
loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515317, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:22:04.635: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:22:04.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:22:05.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-193" for this suite. 
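The two DeploymentStatus dumps above show the framework polling until the webhook deployment stops reporting "MinimumReplicasUnavailable". A minimal sketch of the completeness check being waited on (a hypothetical helper under assumed field names, not the e2e framework's actual code):

```python
def deployment_complete(status: dict, generation: int) -> bool:
    """Rough equivalent of the condition the framework polls for:
    the controller has observed the latest generation and every
    replica is both updated and available."""
    return (
        status["observedGeneration"] >= generation
        and status["updatedReplicas"] == status["replicas"]
        and status["availableReplicas"] == status["replicas"]
    )

# Values from the 17:21:59 dump above (AvailableReplicas:0) are not yet
# complete, which is why the test keeps logging further status lines.
progressing = {"observedGeneration": 1, "replicas": 1,
               "updatedReplicas": 1, "availableReplicas": 0}
```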
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.240 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":120,"skipped":2039,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:22:05.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-4cf2d6f4-efd1-4b8f-a81b-6a3a1ee3b2cb STEP: Creating a pod to test consume secrets Oct 5 17:22:06.071: INFO: Waiting up to 5m0s for pod "pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09" in namespace "secrets-6012" to be "Succeeded or Failed" Oct 5 17:22:06.074: INFO: Pod "pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296347ms Oct 5 17:22:08.138: INFO: Pod "pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066729683s Oct 5 17:22:10.276: INFO: Pod "pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204748698s STEP: Saw pod success Oct 5 17:22:10.276: INFO: Pod "pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09" satisfied condition "Succeeded or Failed" Oct 5 17:22:10.279: INFO: Trying to get logs from node latest-worker pod pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09 container secret-env-test: STEP: delete the pod Oct 5 17:22:10.345: INFO: Waiting for pod pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09 to disappear Oct 5 17:22:10.419: INFO: Pod pod-secrets-5e28356e-c54c-47eb-9d58-ab1bab513f09 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:22:10.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6012" for this suite. 
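The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines, with Phase and Elapsed printed every couple of seconds, come from a simple poll loop. A self-contained sketch of that pattern (the function name and injected clock/sleep are assumptions for testability, not the framework's API):

```python
import time


def wait_for_pod_phase(get_phase, timeout_s=300, poll_s=2.0,
                       targets=("Succeeded", "Failed"),
                       sleep=time.sleep, clock=time.monotonic):
    """Poll get_phase() until it returns a terminal phase or the
    timeout expires, mirroring the Pending/Pending/Succeeded
    progression in the log above. get_phase stands in for a real
    pod-status API call (hypothetical, not client-go)."""
    deadline = clock() + timeout_s
    while True:
        phase = get_phase()
        if phase in targets:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(poll_s)
```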
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":2060,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:22:10.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 5 17:22:10.479: INFO: Waiting up to 5m0s for pod "pod-0ecfc103-4777-4c22-a5f1-b7323742d48e" in namespace "emptydir-522" to be "Succeeded or Failed" Oct 5 17:22:10.482: INFO: Pod "pod-0ecfc103-4777-4c22-a5f1-b7323742d48e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.372533ms Oct 5 17:22:12.507: INFO: Pod "pod-0ecfc103-4777-4c22-a5f1-b7323742d48e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028087906s Oct 5 17:22:14.511: INFO: Pod "pod-0ecfc103-4777-4c22-a5f1-b7323742d48e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032227557s Oct 5 17:22:16.515: INFO: Pod "pod-0ecfc103-4777-4c22-a5f1-b7323742d48e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036691941s STEP: Saw pod success Oct 5 17:22:16.515: INFO: Pod "pod-0ecfc103-4777-4c22-a5f1-b7323742d48e" satisfied condition "Succeeded or Failed" Oct 5 17:22:16.518: INFO: Trying to get logs from node latest-worker pod pod-0ecfc103-4777-4c22-a5f1-b7323742d48e container test-container: STEP: delete the pod Oct 5 17:22:16.595: INFO: Waiting for pod pod-0ecfc103-4777-4c22-a5f1-b7323742d48e to disappear Oct 5 17:22:16.602: INFO: Pod pod-0ecfc103-4777-4c22-a5f1-b7323742d48e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:22:16.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-522" for this suite. • [SLOW TEST:6.185 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":2071,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:22:16.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:22:50.708: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "container-runtime-3757" for this suite. • [SLOW TEST:34.105 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2081,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:22:50.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 17:22:50.762: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 17:22:50.770: INFO: Waiting for terminating namespaces to be deleted... Oct 5 17:22:50.773: INFO: Logging pods the apiserver thinks are on node latest-worker before test Oct 5 17:22:50.778: INFO: kindnet-9tmlz from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container status recorded) Oct 5 17:22:50.778: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 17:22:50.778: INFO: kube-proxy-fk9hq from kube-system started at 2020-09-23 08:30:39 +0000 UTC (1 container status recorded) Oct 5 17:22:50.778: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 17:22:50.778: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Oct 5 17:22:50.783: INFO: kindnet-z6tnh from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container status recorded) Oct 5 17:22:50.783: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 17:22:50.783: INFO: kube-proxy-whjz5 from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container status recorded) Oct 5 17:22:50.783: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-29c53106-8a8d-4ab9-931b-e142ca6af7da 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-29c53106-8a8d-4ab9-931b-e142ca6af7da off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-29c53106-8a8d-4ab9-931b-e142ca6af7da [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:23:07.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7021" for this suite. 
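The pod1/pod2/pod3 steps above all request hostPort 54321 on the same node, yet all three schedule, because two host ports only conflict when the port, the protocol, and the host IP all collide (with 0.0.0.0 overlapping every IP). A sketch of that predicate under those assumptions (not the scheduler's actual code):

```python
WILDCARD_IP = "0.0.0.0"


def host_ports_conflict(a, b):
    """True if two (hostIP, protocol, hostPort) triples cannot coexist
    on one node. pod1 (127.0.0.1, TCP, 54321), pod2 (127.0.0.2, TCP,
    54321) and pod3 (127.0.0.2, UDP, 54321) from the test above are
    pairwise non-conflicting."""
    ip_a, proto_a, port_a = a
    ip_b, proto_b, port_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    # Same port and protocol: conflict only if the host IPs overlap.
    return ip_a == ip_b or WILDCARD_IP in (ip_a, ip_b)
```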
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.114 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":124,"skipped":2116,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:23:07.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-40177800-b7f0-441a-86a2-88238ed89623 [AfterEach] [sig-node] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:23:07.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8270" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":125,"skipped":2117,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:23:07.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Oct 5 17:23:08.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7368' Oct 5 17:23:08.441: INFO: stderr: "" Oct 5 17:23:08.441: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Oct 5 17:23:09.445: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:23:09.445: INFO: Found 0 / 1 Oct 5 17:23:10.498: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:23:10.498: INFO: Found 0 / 1 Oct 5 17:23:11.446: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:23:11.446: INFO: Found 0 / 1 Oct 5 17:23:12.445: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:23:12.445: INFO: Found 1 / 1 Oct 5 17:23:12.446: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Oct 5 17:23:12.449: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:23:12.449: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 5 17:23:12.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config patch pod agnhost-primary-np2cv --namespace=kubectl-7368 -p {"metadata":{"annotations":{"x":"y"}}}' Oct 5 17:23:12.564: INFO: stderr: "" Oct 5 17:23:12.564: INFO: stdout: "pod/agnhost-primary-np2cv patched\n" STEP: checking annotations Oct 5 17:23:12.577: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:23:12.577: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:23:12.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7368" for this suite. 
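The `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}` invocation logged above applies a strategic-merge patch to add one annotation. A small sketch that assembles the same argv (helper name and shape are assumptions; the flags and patch body are taken from the log):

```python
import json


def kubectl_patch_cmd(pod, namespace, annotations, server, kubeconfig):
    """Build the argv for a strategic-merge patch that adds the given
    annotations to a pod, matching the command shown in the log."""
    patch = {"metadata": {"annotations": annotations}}
    return [
        "kubectl",
        f"--server={server}",
        f"--kubeconfig={kubeconfig}",
        "patch", "pod", pod,
        f"--namespace={namespace}",
        "-p", json.dumps(patch, separators=(",", ":")),
    ]
```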
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":126,"skipped":2122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:23:12.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-855 STEP: creating service affinity-clusterip-transition in namespace services-855 STEP: creating replication controller affinity-clusterip-transition in namespace services-855 I1005 17:23:12.713307 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-855, replica count: 3 I1005 17:23:15.763785 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:23:18.764045 7 runners.go:190] affinity-clusterip-transition Pods: 
3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:23:21.764295 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:23:21.770: INFO: Creating new exec pod Oct 5 17:23:26.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-855 execpod-affinity9pkrx -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Oct 5 17:23:27.045: INFO: stderr: "I1005 17:23:26.926724 1350 log.go:181] (0xc00003a420) (0xc000d8a000) Create stream\nI1005 17:23:26.926786 1350 log.go:181] (0xc00003a420) (0xc000d8a000) Stream added, broadcasting: 1\nI1005 17:23:26.931121 1350 log.go:181] (0xc00003a420) Reply frame received for 1\nI1005 17:23:26.931178 1350 log.go:181] (0xc00003a420) (0xc000160140) Create stream\nI1005 17:23:26.931193 1350 log.go:181] (0xc00003a420) (0xc000160140) Stream added, broadcasting: 3\nI1005 17:23:26.932675 1350 log.go:181] (0xc00003a420) Reply frame received for 3\nI1005 17:23:26.932705 1350 log.go:181] (0xc00003a420) (0xc00088a460) Create stream\nI1005 17:23:26.932716 1350 log.go:181] (0xc00003a420) (0xc00088a460) Stream added, broadcasting: 5\nI1005 17:23:26.933784 1350 log.go:181] (0xc00003a420) Reply frame received for 5\nI1005 17:23:27.035743 1350 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:23:27.035775 1350 log.go:181] (0xc00088a460) (5) Data frame handling\nI1005 17:23:27.035797 1350 log.go:181] (0xc00088a460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI1005 17:23:27.036717 1350 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:23:27.036740 1350 log.go:181] (0xc00088a460) (5) Data frame handling\nI1005 17:23:27.036767 1350 log.go:181] (0xc00088a460) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port 
[tcp/http] succeeded!\nI1005 17:23:27.036796 1350 log.go:181] (0xc00003a420) Data frame received for 3\nI1005 17:23:27.036817 1350 log.go:181] (0xc000160140) (3) Data frame handling\nI1005 17:23:27.037656 1350 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:23:27.037681 1350 log.go:181] (0xc00088a460) (5) Data frame handling\nI1005 17:23:27.038743 1350 log.go:181] (0xc00003a420) Data frame received for 1\nI1005 17:23:27.038788 1350 log.go:181] (0xc000d8a000) (1) Data frame handling\nI1005 17:23:27.038822 1350 log.go:181] (0xc000d8a000) (1) Data frame sent\nI1005 17:23:27.038852 1350 log.go:181] (0xc00003a420) (0xc000d8a000) Stream removed, broadcasting: 1\nI1005 17:23:27.038873 1350 log.go:181] (0xc00003a420) Go away received\nI1005 17:23:27.039442 1350 log.go:181] (0xc00003a420) (0xc000d8a000) Stream removed, broadcasting: 1\nI1005 17:23:27.039469 1350 log.go:181] (0xc00003a420) (0xc000160140) Stream removed, broadcasting: 3\nI1005 17:23:27.039481 1350 log.go:181] (0xc00003a420) (0xc00088a460) Stream removed, broadcasting: 5\n" Oct 5 17:23:27.046: INFO: stdout: "" Oct 5 17:23:27.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-855 execpod-affinity9pkrx -- /bin/sh -x -c nc -zv -t -w 2 10.104.50.141 80' Oct 5 17:23:27.277: INFO: stderr: "I1005 17:23:27.189412 1369 log.go:181] (0xc000b5a0b0) (0xc000c16be0) Create stream\nI1005 17:23:27.189475 1369 log.go:181] (0xc000b5a0b0) (0xc000c16be0) Stream added, broadcasting: 1\nI1005 17:23:27.194283 1369 log.go:181] (0xc000b5a0b0) Reply frame received for 1\nI1005 17:23:27.194343 1369 log.go:181] (0xc000b5a0b0) (0xc0001a3c20) Create stream\nI1005 17:23:27.194382 1369 log.go:181] (0xc000b5a0b0) (0xc0001a3c20) Stream added, broadcasting: 3\nI1005 17:23:27.195396 1369 log.go:181] (0xc000b5a0b0) Reply frame received for 3\nI1005 17:23:27.195429 1369 log.go:181] (0xc000b5a0b0) (0xc0003cafa0) Create stream\nI1005 17:23:27.195439 
1369 log.go:181] (0xc000b5a0b0) (0xc0003cafa0) Stream added, broadcasting: 5\nI1005 17:23:27.196310 1369 log.go:181] (0xc000b5a0b0) Reply frame received for 5\nI1005 17:23:27.269933 1369 log.go:181] (0xc000b5a0b0) Data frame received for 3\nI1005 17:23:27.269974 1369 log.go:181] (0xc0001a3c20) (3) Data frame handling\nI1005 17:23:27.270022 1369 log.go:181] (0xc000b5a0b0) Data frame received for 5\nI1005 17:23:27.270081 1369 log.go:181] (0xc0003cafa0) (5) Data frame handling\nI1005 17:23:27.270110 1369 log.go:181] (0xc0003cafa0) (5) Data frame sent\nI1005 17:23:27.270141 1369 log.go:181] (0xc000b5a0b0) Data frame received for 5\nI1005 17:23:27.270154 1369 log.go:181] (0xc0003cafa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.50.141 80\nConnection to 10.104.50.141 80 port [tcp/http] succeeded!\nI1005 17:23:27.271530 1369 log.go:181] (0xc000b5a0b0) Data frame received for 1\nI1005 17:23:27.271553 1369 log.go:181] (0xc000c16be0) (1) Data frame handling\nI1005 17:23:27.271573 1369 log.go:181] (0xc000c16be0) (1) Data frame sent\nI1005 17:23:27.271592 1369 log.go:181] (0xc000b5a0b0) (0xc000c16be0) Stream removed, broadcasting: 1\nI1005 17:23:27.271610 1369 log.go:181] (0xc000b5a0b0) Go away received\nI1005 17:23:27.272137 1369 log.go:181] (0xc000b5a0b0) (0xc000c16be0) Stream removed, broadcasting: 1\nI1005 17:23:27.272166 1369 log.go:181] (0xc000b5a0b0) (0xc0001a3c20) Stream removed, broadcasting: 3\nI1005 17:23:27.272178 1369 log.go:181] (0xc000b5a0b0) (0xc0003cafa0) Stream removed, broadcasting: 5\n" Oct 5 17:23:27.278: INFO: stdout: "" Oct 5 17:23:27.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-855 execpod-affinity9pkrx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.50.141:80/ ; done' Oct 5 17:23:27.602: INFO: stderr: "I1005 17:23:27.443267 1387 log.go:181] (0xc00003a0b0) (0xc0001e6a00) Create stream\nI1005 17:23:27.443327 1387 
log.go:181] (0xc00003a0b0) (0xc0001e6a00) Stream added, broadcasting: 1\nI1005 17:23:27.445656 1387 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1005 17:23:27.445716 1387 log.go:181] (0xc00003a0b0) (0xc0001e7680) Create stream\nI1005 17:23:27.445732 1387 log.go:181] (0xc00003a0b0) (0xc0001e7680) Stream added, broadcasting: 3\nI1005 17:23:27.446778 1387 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1005 17:23:27.446812 1387 log.go:181] (0xc00003a0b0) (0xc0009c2640) Create stream\nI1005 17:23:27.446822 1387 log.go:181] (0xc00003a0b0) (0xc0009c2640) Stream added, broadcasting: 5\nI1005 17:23:27.447886 1387 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1005 17:23:27.505058 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.505084 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.505109 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.505145 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.505162 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.505188 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\nI1005 17:23:27.511490 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.511519 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.511535 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.512152 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.512194 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.512212 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.512232 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.512258 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.512281 1387 log.go:181] (0xc0001e7680) (3) Data 
frame sent\nI1005 17:23:27.516328 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.516342 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.516349 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.516962 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.516973 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.516978 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.516993 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.517014 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.517035 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.522179 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.522211 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.522243 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.525356 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.525386 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.525401 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.525409 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.525421 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.525427 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.526404 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.526432 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.526454 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.526784 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.526805 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.526870 1387 log.go:181] 
(0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.526890 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.526903 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.526915 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.531710 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.531727 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.531741 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.532272 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.532294 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.532304 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.532319 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.532326 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.532335 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.538237 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.538263 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.538288 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.538795 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.538811 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.538819 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.538865 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.538878 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.538884 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.543119 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.543137 1387 
log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.543146 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.543880 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.543899 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.543907 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.543921 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.543927 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.543933 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.549089 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.549112 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.549129 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.549818 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.549839 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.549860 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.549867 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.549877 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.549882 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.555166 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.555190 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.555204 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.556049 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.556094 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.556115 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.556143 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 
17:23:27.556159 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.556184 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\nI1005 17:23:27.556217 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.556234 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.556268 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\nI1005 17:23:27.560002 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.560025 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.560045 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.560515 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.560539 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.560550 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.560566 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.560575 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.560582 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.564388 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.564408 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.564434 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.565376 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.565388 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.565394 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.565421 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.565450 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.565469 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 
17:23:27.571136 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.571152 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.571165 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.571690 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.571725 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.571742 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.571761 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.571770 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.571785 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.575412 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.575426 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.575434 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.575924 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.575950 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.575965 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.575982 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.575992 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.576002 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.582819 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.582846 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.582870 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.583457 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.583479 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.583490 1387 log.go:181] (0xc0001e7680) (3) Data 
frame sent\nI1005 17:23:27.583510 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.583518 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.583526 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.587609 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.587632 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.587654 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.588056 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.588077 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.588086 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.588100 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.588121 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.588139 1387 log.go:181] (0xc0009c2640) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.594210 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.594245 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.594271 1387 log.go:181] (0xc0001e7680) (3) Data frame sent\nI1005 17:23:27.594921 1387 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 17:23:27.594944 1387 log.go:181] (0xc0009c2640) (5) Data frame handling\nI1005 17:23:27.594968 1387 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 17:23:27.594979 1387 log.go:181] (0xc0001e7680) (3) Data frame handling\nI1005 17:23:27.596498 1387 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1005 17:23:27.596519 1387 log.go:181] (0xc0001e6a00) (1) Data frame handling\nI1005 17:23:27.596533 1387 log.go:181] (0xc0001e6a00) (1) Data frame sent\nI1005 17:23:27.596548 1387 log.go:181] (0xc00003a0b0) (0xc0001e6a00) Stream removed, broadcasting: 1\nI1005 17:23:27.596571 
1387 log.go:181] (0xc00003a0b0) Go away received\nI1005 17:23:27.597130 1387 log.go:181] (0xc00003a0b0) (0xc0001e6a00) Stream removed, broadcasting: 1\nI1005 17:23:27.597153 1387 log.go:181] (0xc00003a0b0) (0xc0001e7680) Stream removed, broadcasting: 3\nI1005 17:23:27.597163 1387 log.go:181] (0xc00003a0b0) (0xc0009c2640) Stream removed, broadcasting: 5\n" Oct 5 17:23:27.602: INFO: stdout: "\naffinity-clusterip-transition-rxr6b\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-zsn7s\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-rxr6b\naffinity-clusterip-transition-rxr6b\naffinity-clusterip-transition-zsn7s\naffinity-clusterip-transition-zsn7s\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2" Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-rxr6b Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-zsn7s Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-rxr6b Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-rxr6b Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-zsn7s Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-zsn7s Oct 5 
17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.602: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-855 execpod-affinity9pkrx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.50.141:80/ ; done' Oct 5 17:23:27.934: INFO: stderr: "I1005 17:23:27.756117 1405 log.go:181] (0xc000cb9340) (0xc000d84500) Create stream\nI1005 17:23:27.756175 1405 log.go:181] (0xc000cb9340) (0xc000d84500) Stream added, broadcasting: 1\nI1005 17:23:27.761337 1405 log.go:181] (0xc000cb9340) Reply frame received for 1\nI1005 17:23:27.761403 1405 log.go:181] (0xc000cb9340) (0xc000d08000) Create stream\nI1005 17:23:27.761424 1405 log.go:181] (0xc000cb9340) (0xc000d08000) Stream added, broadcasting: 3\nI1005 17:23:27.762166 1405 log.go:181] (0xc000cb9340) Reply frame received for 3\nI1005 17:23:27.762201 1405 log.go:181] (0xc000cb9340) (0xc000b7e000) Create stream\nI1005 17:23:27.762214 1405 log.go:181] (0xc000cb9340) (0xc000b7e000) Stream added, broadcasting: 5\nI1005 17:23:27.762855 1405 log.go:181] (0xc000cb9340) Reply frame received for 5\nI1005 17:23:27.822447 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.822484 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.822495 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.822511 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.822525 1405 
log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.822536 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.828130 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.828156 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.828182 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.828817 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.828906 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.828922 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.828934 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.828943 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.828954 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.833065 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.833087 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.833105 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.833524 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.833552 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.833563 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.833579 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.833589 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.833596 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.837724 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.837743 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.837758 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.838493 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 
17:23:27.838520 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.838538 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.838558 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.838571 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.838581 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.844963 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.844988 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.845014 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.845951 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.845972 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.845983 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.846004 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.846015 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.846024 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.850023 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.850050 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.850072 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.850771 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.850801 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.850827 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.850848 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.850869 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.850881 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 
17:23:27.856579 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.856592 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.856599 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.857407 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.857429 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.857444 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.857465 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.857476 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.857492 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\nI1005 17:23:27.865090 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.865106 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.865118 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.866189 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.866221 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.866232 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.866244 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.866269 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.866282 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.872230 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.872253 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.872273 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.873164 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.873193 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.873207 1405 log.go:181] (0xc000d08000) (3) Data 
frame sent\nI1005 17:23:27.873225 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.873234 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.873244 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.878523 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.878552 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.878584 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.878866 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.878887 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.878905 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.879071 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.879094 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.879113 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.886418 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.886439 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.886451 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.887423 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.887454 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.887471 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.887499 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.887539 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.887598 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.894285 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.894322 1405 log.go:181] (0xc000d08000) (3) 
Data frame handling\nI1005 17:23:27.894364 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.895344 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.895360 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.895368 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.895381 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.895385 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.895391 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.898640 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.898660 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.898678 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.899234 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.899282 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.899320 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI1005 17:23:27.899357 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.899367 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.899375 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n 2 http://10.104.50.141:80/\nI1005 17:23:27.899484 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.899506 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.899525 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.906427 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.906445 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.906450 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.907123 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.907150 1405 log.go:181] 
(0xc000d08000) (3) Data frame handling\nI1005 17:23:27.907160 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.907173 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.907179 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.907186 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.911191 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.911217 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.911232 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.911718 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.911734 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.911750 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\nI1005 17:23:27.911758 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.911764 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.50.141:80/\nI1005 17:23:27.911781 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\nI1005 17:23:27.911817 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.911868 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.911889 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.918447 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.918464 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.918474 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.918825 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.918843 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.918851 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\nI1005 17:23:27.918862 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.918878 
1405 log.go:181] (0xc000b7e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1005 17:23:27.918892 1405 log.go:181] (0xc000cb9340) Data frame received for 3\n http://10.104.50.141:80/\nI1005 17:23:27.918904 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.918920 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.918942 1405 log.go:181] (0xc000b7e000) (5) Data frame sent\nI1005 17:23:27.925823 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.925843 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.925857 1405 log.go:181] (0xc000d08000) (3) Data frame sent\nI1005 17:23:27.926644 1405 log.go:181] (0xc000cb9340) Data frame received for 5\nI1005 17:23:27.926664 1405 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1005 17:23:27.926785 1405 log.go:181] (0xc000cb9340) Data frame received for 3\nI1005 17:23:27.926797 1405 log.go:181] (0xc000d08000) (3) Data frame handling\nI1005 17:23:27.928293 1405 log.go:181] (0xc000cb9340) Data frame received for 1\nI1005 17:23:27.928317 1405 log.go:181] (0xc000d84500) (1) Data frame handling\nI1005 17:23:27.928340 1405 log.go:181] (0xc000d84500) (1) Data frame sent\nI1005 17:23:27.928363 1405 log.go:181] (0xc000cb9340) (0xc000d84500) Stream removed, broadcasting: 1\nI1005 17:23:27.928538 1405 log.go:181] (0xc000cb9340) Go away received\nI1005 17:23:27.929022 1405 log.go:181] (0xc000cb9340) (0xc000d84500) Stream removed, broadcasting: 1\nI1005 17:23:27.929042 1405 log.go:181] (0xc000cb9340) (0xc000d08000) Stream removed, broadcasting: 3\nI1005 17:23:27.929052 1405 log.go:181] (0xc000cb9340) (0xc000b7e000) Stream removed, broadcasting: 5\n" Oct 5 17:23:27.934: INFO: stdout: 
"\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2\naffinity-clusterip-transition-hpvd2" Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Received 
response from host: affinity-clusterip-transition-hpvd2 Oct 5 17:23:27.934: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-855, will wait for the garbage collector to delete the pods Oct 5 17:23:28.035: INFO: Deleting ReplicationController affinity-clusterip-transition took: 7.940399ms Oct 5 17:23:28.435: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.331016ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:23:40.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-855" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:27.450 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":127,"skipped":2161,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:23:40.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:23:51.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8700" for this suite. • [SLOW TEST:11.144 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":128,"skipped":2164,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:23:51.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3589 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3589 I1005 17:23:51.427280 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3589, replica count: 2 I1005 17:23:54.477712 7 runners.go:190] externalname-service Pods: 2 out of 2 
created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:23:57.478002 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:23:57.478: INFO: Creating new exec pod Oct 5 17:24:02.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-3589 execpodpnn56 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Oct 5 17:24:02.742: INFO: stderr: "I1005 17:24:02.634295 1422 log.go:181] (0xc00003a420) (0xc000abc000) Create stream\nI1005 17:24:02.634354 1422 log.go:181] (0xc00003a420) (0xc000abc000) Stream added, broadcasting: 1\nI1005 17:24:02.635883 1422 log.go:181] (0xc00003a420) Reply frame received for 1\nI1005 17:24:02.635911 1422 log.go:181] (0xc00003a420) (0xc0009dc280) Create stream\nI1005 17:24:02.635918 1422 log.go:181] (0xc00003a420) (0xc0009dc280) Stream added, broadcasting: 3\nI1005 17:24:02.636701 1422 log.go:181] (0xc00003a420) Reply frame received for 3\nI1005 17:24:02.636745 1422 log.go:181] (0xc00003a420) (0xc000923ea0) Create stream\nI1005 17:24:02.636759 1422 log.go:181] (0xc00003a420) (0xc000923ea0) Stream added, broadcasting: 5\nI1005 17:24:02.637471 1422 log.go:181] (0xc00003a420) Reply frame received for 5\nI1005 17:24:02.733591 1422 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:24:02.733623 1422 log.go:181] (0xc000923ea0) (5) Data frame handling\nI1005 17:24:02.733669 1422 log.go:181] (0xc000923ea0) (5) Data frame sent\nI1005 17:24:02.733687 1422 log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:24:02.733698 1422 log.go:181] (0xc000923ea0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1005 17:24:02.733717 1422 log.go:181] (0xc000923ea0) (5) Data frame sent\nI1005 17:24:02.733884 1422 
log.go:181] (0xc00003a420) Data frame received for 5\nI1005 17:24:02.733918 1422 log.go:181] (0xc000923ea0) (5) Data frame handling\nI1005 17:24:02.734215 1422 log.go:181] (0xc00003a420) Data frame received for 3\nI1005 17:24:02.734231 1422 log.go:181] (0xc0009dc280) (3) Data frame handling\nI1005 17:24:02.735997 1422 log.go:181] (0xc00003a420) Data frame received for 1\nI1005 17:24:02.736027 1422 log.go:181] (0xc000abc000) (1) Data frame handling\nI1005 17:24:02.736047 1422 log.go:181] (0xc000abc000) (1) Data frame sent\nI1005 17:24:02.736067 1422 log.go:181] (0xc00003a420) (0xc000abc000) Stream removed, broadcasting: 1\nI1005 17:24:02.736096 1422 log.go:181] (0xc00003a420) Go away received\nI1005 17:24:02.736457 1422 log.go:181] (0xc00003a420) (0xc000abc000) Stream removed, broadcasting: 1\nI1005 17:24:02.736478 1422 log.go:181] (0xc00003a420) (0xc0009dc280) Stream removed, broadcasting: 3\nI1005 17:24:02.736487 1422 log.go:181] (0xc00003a420) (0xc000923ea0) Stream removed, broadcasting: 5\n" Oct 5 17:24:02.742: INFO: stdout: "" Oct 5 17:24:02.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-3589 execpodpnn56 -- /bin/sh -x -c nc -zv -t -w 2 10.99.233.148 80' Oct 5 17:24:02.974: INFO: stderr: "I1005 17:24:02.882676 1440 log.go:181] (0xc00015f290) (0xc000156820) Create stream\nI1005 17:24:02.882729 1440 log.go:181] (0xc00015f290) (0xc000156820) Stream added, broadcasting: 1\nI1005 17:24:02.885721 1440 log.go:181] (0xc00015f290) Reply frame received for 1\nI1005 17:24:02.885746 1440 log.go:181] (0xc00015f290) (0xc0006ce320) Create stream\nI1005 17:24:02.885754 1440 log.go:181] (0xc00015f290) (0xc0006ce320) Stream added, broadcasting: 3\nI1005 17:24:02.886710 1440 log.go:181] (0xc00015f290) Reply frame received for 3\nI1005 17:24:02.886758 1440 log.go:181] (0xc00015f290) (0xc0009363c0) Create stream\nI1005 17:24:02.886771 1440 log.go:181] (0xc00015f290) (0xc0009363c0) Stream 
added, broadcasting: 5\nI1005 17:24:02.887753 1440 log.go:181] (0xc00015f290) Reply frame received for 5\nI1005 17:24:02.965320 1440 log.go:181] (0xc00015f290) Data frame received for 5\nI1005 17:24:02.965375 1440 log.go:181] (0xc0009363c0) (5) Data frame handling\nI1005 17:24:02.965398 1440 log.go:181] (0xc0009363c0) (5) Data frame sent\nI1005 17:24:02.965415 1440 log.go:181] (0xc00015f290) Data frame received for 5\nI1005 17:24:02.965433 1440 log.go:181] (0xc0009363c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.233.148 80\nConnection to 10.99.233.148 80 port [tcp/http] succeeded!\nI1005 17:24:02.965490 1440 log.go:181] (0xc00015f290) Data frame received for 3\nI1005 17:24:02.965546 1440 log.go:181] (0xc0006ce320) (3) Data frame handling\nI1005 17:24:02.966833 1440 log.go:181] (0xc00015f290) Data frame received for 1\nI1005 17:24:02.966856 1440 log.go:181] (0xc000156820) (1) Data frame handling\nI1005 17:24:02.966868 1440 log.go:181] (0xc000156820) (1) Data frame sent\nI1005 17:24:02.966881 1440 log.go:181] (0xc00015f290) (0xc000156820) Stream removed, broadcasting: 1\nI1005 17:24:02.966899 1440 log.go:181] (0xc00015f290) Go away received\nI1005 17:24:02.967483 1440 log.go:181] (0xc00015f290) (0xc000156820) Stream removed, broadcasting: 1\nI1005 17:24:02.967521 1440 log.go:181] (0xc00015f290) (0xc0006ce320) Stream removed, broadcasting: 3\nI1005 17:24:02.967541 1440 log.go:181] (0xc00015f290) (0xc0009363c0) Stream removed, broadcasting: 5\n" Oct 5 17:24:02.974: INFO: stdout: "" Oct 5 17:24:02.974: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:02.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3589" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.827 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":129,"skipped":2185,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:03.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Oct 5 17:24:03.090: INFO: Waiting up to 5m0s for pod "var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171" in namespace "var-expansion-2199" to be "Succeeded or 
Failed" Oct 5 17:24:03.107: INFO: Pod "var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171": Phase="Pending", Reason="", readiness=false. Elapsed: 17.156979ms Oct 5 17:24:05.116: INFO: Pod "var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026380485s Oct 5 17:24:07.120: INFO: Pod "var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030120999s STEP: Saw pod success Oct 5 17:24:07.120: INFO: Pod "var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171" satisfied condition "Succeeded or Failed" Oct 5 17:24:07.123: INFO: Trying to get logs from node latest-worker2 pod var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171 container dapi-container: STEP: delete the pod Oct 5 17:24:07.207: INFO: Waiting for pod var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171 to disappear Oct 5 17:24:07.230: INFO: Pod var-expansion-2084876d-316d-458a-962d-2bf9e6fa1171 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2199" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":2213,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:07.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:24:07.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14" in namespace "projected-4211" to be "Succeeded or Failed" Oct 5 17:24:07.320: INFO: Pod "downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304017ms Oct 5 17:24:09.367: INFO: Pod "downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04964963s Oct 5 17:24:11.427: INFO: Pod "downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14": Phase="Running", Reason="", readiness=true. Elapsed: 4.109978476s Oct 5 17:24:13.431: INFO: Pod "downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113933396s STEP: Saw pod success Oct 5 17:24:13.431: INFO: Pod "downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14" satisfied condition "Succeeded or Failed" Oct 5 17:24:13.433: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14 container client-container: STEP: delete the pod Oct 5 17:24:13.474: INFO: Waiting for pod downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14 to disappear Oct 5 17:24:13.486: INFO: Pod downwardapi-volume-fb8aab12-ccce-4026-ba72-4e550fbcfd14 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:13.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4211" for this suite. 
• [SLOW TEST:6.256 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":2235,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:13.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 5 17:24:13.558: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 5 17:24:24.359: INFO: >>> 
kubeConfig: /root/.kube/config Oct 5 17:24:27.306: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:38.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7473" for this suite. • [SLOW TEST:24.655 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":132,"skipped":2236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:38.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by 
dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:24:38.306: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"34067bdd-4018-4ae4-958d-d35450aacdd0", Controller:(*bool)(0xc004bdc8b2), BlockOwnerDeletion:(*bool)(0xc004bdc8b3)}} Oct 5 17:24:38.366: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d1610851-d432-4b41-9085-b7aef1c5dd69", Controller:(*bool)(0xc004b5ffb2), BlockOwnerDeletion:(*bool)(0xc004b5ffb3)}} Oct 5 17:24:38.393: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7708b8a6-f76c-4977-be86-b460af1307a0", Controller:(*bool)(0xc004bdcaaa), BlockOwnerDeletion:(*bool)(0xc004bdcaab)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:43.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1184" for this suite. 
• [SLOW TEST:5.320 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":133,"skipped":2291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:43.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 17:24:43.633: INFO: Waiting up to 5m0s for pod "downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a" in namespace "downward-api-8043" to be "Succeeded or Failed" Oct 5 17:24:43.636: INFO: Pod "downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.688048ms Oct 5 17:24:45.775: INFO: Pod "downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142008697s Oct 5 17:24:51.417: INFO: Pod "downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.783964525s STEP: Saw pod success Oct 5 17:24:51.417: INFO: Pod "downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a" satisfied condition "Succeeded or Failed" Oct 5 17:24:51.643: INFO: Trying to get logs from node latest-worker2 pod downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a container dapi-container: STEP: delete the pod Oct 5 17:24:51.795: INFO: Waiting for pod downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a to disappear Oct 5 17:24:51.924: INFO: Pod downward-api-4fed4f1b-aa04-4e44-b33b-a7a66116d80a no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:51.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8043" for this suite. 
• [SLOW TEST:8.461 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":134,"skipped":2323,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:24:51.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:24:52.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8948" for this suite.
•
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2336,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:24:52.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Oct 5 17:24:52.203: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4058 /api/v1/namespaces/watch-4058/configmaps/e2e-watch-test-watch-closed c23a07ec-0db9-4447-8802-a4029ad9b5ee 3403925 0 2020-10-05 17:24:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test
Update v1 2020-10-05 17:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 17:24:52.203: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4058 /api/v1/namespaces/watch-4058/configmaps/e2e-watch-test-watch-closed c23a07ec-0db9-4447-8802-a4029ad9b5ee 3403926 0 2020-10-05 17:24:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 17:24:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 5 17:24:52.248: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4058 /api/v1/namespaces/watch-4058/configmaps/e2e-watch-test-watch-closed c23a07ec-0db9-4447-8802-a4029ad9b5ee 3403929 0 2020-10-05 17:24:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 17:24:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 17:24:52.248: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4058 /api/v1/namespaces/watch-4058/configmaps/e2e-watch-test-watch-closed c23a07ec-0db9-4447-8802-a4029ad9b5ee 3403931 0 2020-10-05 17:24:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 17:24:52 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:52.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4058" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":136,"skipped":2341,"failed":0} SS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:52.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: 
watching Oct 5 17:24:52.397: INFO: starting watch STEP: patching STEP: updating Oct 5 17:24:52.409: INFO: waiting for watch events with expected annotations Oct 5 17:24:52.409: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:24:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5026" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":137,"skipped":2343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:24:52.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 5 17:24:56.595: INFO: &Pod{ObjectMeta:{send-events-ead5ce6e-1897-4002-8517-3d3e2fdae625 
events-6184 /api/v1/namespaces/events-6184/pods/send-events-ead5ce6e-1897-4002-8517-3d3e2fdae625 3e9d4dc5-4014-4826-8ee9-683fe772cace 3403973 0 2020-10-05 17:24:52 +0000 UTC map[name:foo time:519622143] map[] [] [] [{e2e.test Update v1 2020-10-05 17:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 17:24:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.227\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x4sh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x4sh5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container
{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x4sh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralC
ontainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:24:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:24:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:24:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:24:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.227,StartTime:2020-10-05 17:24:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 17:24:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://18baaeaf4b5b0f9c83a3fdb5ede59fb6ba2fcf1b8420369d6060f83204578eda,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 5 17:24:58.601: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 5 17:25:00.606: INFO: Saw kubelet event for our pod. 
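The pass/fail decision above hinges on the test finding both a scheduler event and a kubelet event for the pod. When reproducing this check by hand against a live cluster, the same events can be listed with a field selector on the involved object; the namespace and pod name below are taken from the log above, and any kubectl with access to that cluster would do:

```
# Events recorded for the test pod, from all components (scheduler, kubelet, ...)
kubectl get events --namespace events-6184 \
  --field-selector involvedObject.name=send-events-ead5ce6e-1897-4002-8517-3d3e2fdae625
```

Filtering on `involvedObject.name` rather than `reason` keeps both event sources visible, which mirrors what the test asserts.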
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:25:00.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6184" for this suite.
• [SLOW TEST:8.181 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":138,"skipped":2380,"failed":0}
SS
------------------------------
[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:25:00.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity timeout work for NodePort service [LinuxOnly]
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7660 Oct 5 17:25:04.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 5 17:25:04.993: INFO: stderr: "I1005 17:25:04.898620 1458 log.go:181] (0xc00018c370) (0xc00017e000) Create stream\nI1005 17:25:04.898698 1458 log.go:181] (0xc00018c370) (0xc00017e000) Stream added, broadcasting: 1\nI1005 17:25:04.902157 1458 log.go:181] (0xc00018c370) Reply frame received for 1\nI1005 17:25:04.902245 1458 log.go:181] (0xc00018c370) (0xc000902280) Create stream\nI1005 17:25:04.902299 1458 log.go:181] (0xc00018c370) (0xc000902280) Stream added, broadcasting: 3\nI1005 17:25:04.904419 1458 log.go:181] (0xc00018c370) Reply frame received for 3\nI1005 17:25:04.904474 1458 log.go:181] (0xc00018c370) (0xc000be0280) Create stream\nI1005 17:25:04.904491 1458 log.go:181] (0xc00018c370) (0xc000be0280) Stream added, broadcasting: 5\nI1005 17:25:04.907642 1458 log.go:181] (0xc00018c370) Reply frame received for 5\nI1005 17:25:04.977336 1458 log.go:181] (0xc00018c370) Data frame received for 5\nI1005 17:25:04.977367 1458 log.go:181] (0xc000be0280) (5) Data frame handling\nI1005 17:25:04.977389 1458 log.go:181] (0xc000be0280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1005 17:25:04.983606 1458 log.go:181] (0xc00018c370) Data frame received for 3\nI1005 17:25:04.983629 1458 log.go:181] (0xc000902280) (3) Data frame handling\nI1005 17:25:04.983650 1458 log.go:181] (0xc000902280) (3) Data frame sent\nI1005 17:25:04.984634 1458 log.go:181] (0xc00018c370) Data frame received for 5\nI1005 17:25:04.984667 1458 log.go:181] (0xc000be0280) (5) Data frame 
handling\nI1005 17:25:04.984698 1458 log.go:181] (0xc00018c370) Data frame received for 3\nI1005 17:25:04.984724 1458 log.go:181] (0xc000902280) (3) Data frame handling\nI1005 17:25:04.986827 1458 log.go:181] (0xc00018c370) Data frame received for 1\nI1005 17:25:04.986869 1458 log.go:181] (0xc00017e000) (1) Data frame handling\nI1005 17:25:04.986891 1458 log.go:181] (0xc00017e000) (1) Data frame sent\nI1005 17:25:04.986912 1458 log.go:181] (0xc00018c370) (0xc00017e000) Stream removed, broadcasting: 1\nI1005 17:25:04.987017 1458 log.go:181] (0xc00018c370) Go away received\nI1005 17:25:04.987492 1458 log.go:181] (0xc00018c370) (0xc00017e000) Stream removed, broadcasting: 1\nI1005 17:25:04.987516 1458 log.go:181] (0xc00018c370) (0xc000902280) Stream removed, broadcasting: 3\nI1005 17:25:04.987528 1458 log.go:181] (0xc00018c370) (0xc000be0280) Stream removed, broadcasting: 5\n"
Oct 5 17:25:04.993: INFO: stdout: "iptables"
Oct 5 17:25:04.993: INFO: proxyMode: iptables
Oct 5 17:25:04.998: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:05.029: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:07.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:07.034: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:09.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:09.033: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:11.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:11.033: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:13.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:13.034: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:15.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:15.033: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:17.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:17.034: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:19.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:19.051: INFO: Pod kube-proxy-mode-detector still exists
Oct 5 17:25:21.029: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 5 17:25:21.422: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-7660
STEP: creating replication controller affinity-nodeport-timeout in namespace services-7660
I1005 17:25:21.566871 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7660, replica count: 3
I1005 17:25:24.617261 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1005 17:25:27.617438 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 5 17:25:27.626: INFO: Creating new exec pod
Oct 5 17:25:32.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Oct 5 17:25:35.783: INFO: stderr: "I1005 17:25:35.674777 1476 log.go:181] (0xc00003b130) (0xc000a76500) Create stream\nI1005 17:25:35.674840 1476 log.go:181] (0xc00003b130) (0xc000a76500) Stream added, broadcasting: 1\nI1005 17:25:35.677263 1476 log.go:181] (0xc00003b130) Reply frame received for 1\nI1005 17:25:35.677318 1476 log.go:181] (0xc00003b130) (0xc00081c0a0) Create stream\nI1005 17:25:35.677336 1476 log.go:181] (0xc00003b130) (0xc00081c0a0) Stream added, broadcasting: 3\nI1005 17:25:35.678376 1476 log.go:181] (0xc00003b130) Reply frame received for 3\nI1005 17:25:35.678420 1476 log.go:181] (0xc00003b130) (0xc00081c140) Create stream\nI1005 17:25:35.678434 1476 
log.go:181] (0xc00003b130) (0xc00081c140) Stream added, broadcasting: 5\nI1005 17:25:35.679416 1476 log.go:181] (0xc00003b130) Reply frame received for 5\nI1005 17:25:35.775080 1476 log.go:181] (0xc00003b130) Data frame received for 5\nI1005 17:25:35.775117 1476 log.go:181] (0xc00081c140) (5) Data frame handling\nI1005 17:25:35.775144 1476 log.go:181] (0xc00081c140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI1005 17:25:35.775573 1476 log.go:181] (0xc00003b130) Data frame received for 5\nI1005 17:25:35.775591 1476 log.go:181] (0xc00081c140) (5) Data frame handling\nI1005 17:25:35.775600 1476 log.go:181] (0xc00081c140) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1005 17:25:35.775978 1476 log.go:181] (0xc00003b130) Data frame received for 5\nI1005 17:25:35.775996 1476 log.go:181] (0xc00081c140) (5) Data frame handling\nI1005 17:25:35.776024 1476 log.go:181] (0xc00003b130) Data frame received for 3\nI1005 17:25:35.776033 1476 log.go:181] (0xc00081c0a0) (3) Data frame handling\nI1005 17:25:35.777948 1476 log.go:181] (0xc00003b130) Data frame received for 1\nI1005 17:25:35.777964 1476 log.go:181] (0xc000a76500) (1) Data frame handling\nI1005 17:25:35.777973 1476 log.go:181] (0xc000a76500) (1) Data frame sent\nI1005 17:25:35.777984 1476 log.go:181] (0xc00003b130) (0xc000a76500) Stream removed, broadcasting: 1\nI1005 17:25:35.778019 1476 log.go:181] (0xc00003b130) Go away received\nI1005 17:25:35.778367 1476 log.go:181] (0xc00003b130) (0xc000a76500) Stream removed, broadcasting: 1\nI1005 17:25:35.778382 1476 log.go:181] (0xc00003b130) (0xc00081c0a0) Stream removed, broadcasting: 3\nI1005 17:25:35.778394 1476 log.go:181] (0xc00003b130) (0xc00081c140) Stream removed, broadcasting: 5\n" Oct 5 17:25:35.784: INFO: stdout: "" Oct 5 17:25:35.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- 
/bin/sh -x -c nc -zv -t -w 2 10.111.162.212 80' Oct 5 17:25:35.990: INFO: stderr: "I1005 17:25:35.906532 1495 log.go:181] (0xc0007a5810) (0xc0008d26e0) Create stream\nI1005 17:25:35.906614 1495 log.go:181] (0xc0007a5810) (0xc0008d26e0) Stream added, broadcasting: 1\nI1005 17:25:35.911491 1495 log.go:181] (0xc0007a5810) Reply frame received for 1\nI1005 17:25:35.911523 1495 log.go:181] (0xc0007a5810) (0xc0008d2000) Create stream\nI1005 17:25:35.911532 1495 log.go:181] (0xc0007a5810) (0xc0008d2000) Stream added, broadcasting: 3\nI1005 17:25:35.912326 1495 log.go:181] (0xc0007a5810) Reply frame received for 3\nI1005 17:25:35.912365 1495 log.go:181] (0xc0007a5810) (0xc0007161e0) Create stream\nI1005 17:25:35.912375 1495 log.go:181] (0xc0007a5810) (0xc0007161e0) Stream added, broadcasting: 5\nI1005 17:25:35.913248 1495 log.go:181] (0xc0007a5810) Reply frame received for 5\nI1005 17:25:35.984592 1495 log.go:181] (0xc0007a5810) Data frame received for 3\nI1005 17:25:35.984624 1495 log.go:181] (0xc0008d2000) (3) Data frame handling\nI1005 17:25:35.984646 1495 log.go:181] (0xc0007a5810) Data frame received for 5\nI1005 17:25:35.984654 1495 log.go:181] (0xc0007161e0) (5) Data frame handling\nI1005 17:25:35.984662 1495 log.go:181] (0xc0007161e0) (5) Data frame sent\nI1005 17:25:35.984674 1495 log.go:181] (0xc0007a5810) Data frame received for 5\nI1005 17:25:35.984685 1495 log.go:181] (0xc0007161e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.162.212 80\nConnection to 10.111.162.212 80 port [tcp/http] succeeded!\nI1005 17:25:35.986049 1495 log.go:181] (0xc0007a5810) Data frame received for 1\nI1005 17:25:35.986086 1495 log.go:181] (0xc0008d26e0) (1) Data frame handling\nI1005 17:25:35.986115 1495 log.go:181] (0xc0008d26e0) (1) Data frame sent\nI1005 17:25:35.986133 1495 log.go:181] (0xc0007a5810) (0xc0008d26e0) Stream removed, broadcasting: 1\nI1005 17:25:35.986157 1495 log.go:181] (0xc0007a5810) Go away received\nI1005 17:25:35.986403 1495 log.go:181] (0xc0007a5810) 
(0xc0008d26e0) Stream removed, broadcasting: 1\nI1005 17:25:35.986416 1495 log.go:181] (0xc0007a5810) (0xc0008d2000) Stream removed, broadcasting: 3\nI1005 17:25:35.986422 1495 log.go:181] (0xc0007a5810) (0xc0007161e0) Stream removed, broadcasting: 5\n" Oct 5 17:25:35.990: INFO: stdout: "" Oct 5 17:25:35.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31238' Oct 5 17:25:36.188: INFO: stderr: "I1005 17:25:36.113950 1513 log.go:181] (0xc0006d8fd0) (0xc000626500) Create stream\nI1005 17:25:36.114045 1513 log.go:181] (0xc0006d8fd0) (0xc000626500) Stream added, broadcasting: 1\nI1005 17:25:36.118625 1513 log.go:181] (0xc0006d8fd0) Reply frame received for 1\nI1005 17:25:36.118670 1513 log.go:181] (0xc0006d8fd0) (0xc00043ee60) Create stream\nI1005 17:25:36.118681 1513 log.go:181] (0xc0006d8fd0) (0xc00043ee60) Stream added, broadcasting: 3\nI1005 17:25:36.119415 1513 log.go:181] (0xc0006d8fd0) Reply frame received for 3\nI1005 17:25:36.119447 1513 log.go:181] (0xc0006d8fd0) (0xc000626000) Create stream\nI1005 17:25:36.119458 1513 log.go:181] (0xc0006d8fd0) (0xc000626000) Stream added, broadcasting: 5\nI1005 17:25:36.120160 1513 log.go:181] (0xc0006d8fd0) Reply frame received for 5\nI1005 17:25:36.180812 1513 log.go:181] (0xc0006d8fd0) Data frame received for 5\nI1005 17:25:36.181011 1513 log.go:181] (0xc000626000) (5) Data frame handling\nI1005 17:25:36.181035 1513 log.go:181] (0xc000626000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 31238\nConnection to 172.18.0.15 31238 port [tcp/31238] succeeded!\nI1005 17:25:36.181048 1513 log.go:181] (0xc0006d8fd0) Data frame received for 5\nI1005 17:25:36.181099 1513 log.go:181] (0xc000626000) (5) Data frame handling\nI1005 17:25:36.181122 1513 log.go:181] (0xc0006d8fd0) Data frame received for 3\nI1005 17:25:36.181134 1513 log.go:181] (0xc00043ee60) (3) Data frame 
handling\nI1005 17:25:36.182799 1513 log.go:181] (0xc0006d8fd0) Data frame received for 1\nI1005 17:25:36.182832 1513 log.go:181] (0xc000626500) (1) Data frame handling\nI1005 17:25:36.182876 1513 log.go:181] (0xc000626500) (1) Data frame sent\nI1005 17:25:36.182896 1513 log.go:181] (0xc0006d8fd0) (0xc000626500) Stream removed, broadcasting: 1\nI1005 17:25:36.182912 1513 log.go:181] (0xc0006d8fd0) Go away received\nI1005 17:25:36.183328 1513 log.go:181] (0xc0006d8fd0) (0xc000626500) Stream removed, broadcasting: 1\nI1005 17:25:36.183357 1513 log.go:181] (0xc0006d8fd0) (0xc00043ee60) Stream removed, broadcasting: 3\nI1005 17:25:36.183367 1513 log.go:181] (0xc0006d8fd0) (0xc000626000) Stream removed, broadcasting: 5\n" Oct 5 17:25:36.188: INFO: stdout: "" Oct 5 17:25:36.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31238' Oct 5 17:25:36.412: INFO: stderr: "I1005 17:25:36.329409 1531 log.go:181] (0xc000f42fd0) (0xc00031f680) Create stream\nI1005 17:25:36.329481 1531 log.go:181] (0xc000f42fd0) (0xc00031f680) Stream added, broadcasting: 1\nI1005 17:25:36.334355 1531 log.go:181] (0xc000f42fd0) Reply frame received for 1\nI1005 17:25:36.334390 1531 log.go:181] (0xc000f42fd0) (0xc000b88aa0) Create stream\nI1005 17:25:36.334400 1531 log.go:181] (0xc000f42fd0) (0xc000b88aa0) Stream added, broadcasting: 3\nI1005 17:25:36.335280 1531 log.go:181] (0xc000f42fd0) Reply frame received for 3\nI1005 17:25:36.335330 1531 log.go:181] (0xc000f42fd0) (0xc000b88d20) Create stream\nI1005 17:25:36.335346 1531 log.go:181] (0xc000f42fd0) (0xc000b88d20) Stream added, broadcasting: 5\nI1005 17:25:36.336109 1531 log.go:181] (0xc000f42fd0) Reply frame received for 5\nI1005 17:25:36.404968 1531 log.go:181] (0xc000f42fd0) Data frame received for 3\nI1005 17:25:36.404999 1531 log.go:181] (0xc000b88aa0) (3) Data frame handling\nI1005 
17:25:36.405104 1531 log.go:181] (0xc000f42fd0) Data frame received for 5\nI1005 17:25:36.405114 1531 log.go:181] (0xc000b88d20) (5) Data frame handling\nI1005 17:25:36.405123 1531 log.go:181] (0xc000b88d20) (5) Data frame sent\nI1005 17:25:36.405135 1531 log.go:181] (0xc000f42fd0) Data frame received for 5\nI1005 17:25:36.405143 1531 log.go:181] (0xc000b88d20) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 31238\nConnection to 172.18.0.16 31238 port [tcp/31238] succeeded!\nI1005 17:25:36.407094 1531 log.go:181] (0xc000f42fd0) Data frame received for 1\nI1005 17:25:36.407130 1531 log.go:181] (0xc00031f680) (1) Data frame handling\nI1005 17:25:36.407158 1531 log.go:181] (0xc00031f680) (1) Data frame sent\nI1005 17:25:36.407182 1531 log.go:181] (0xc000f42fd0) (0xc00031f680) Stream removed, broadcasting: 1\nI1005 17:25:36.407200 1531 log.go:181] (0xc000f42fd0) Go away received\nI1005 17:25:36.407503 1531 log.go:181] (0xc000f42fd0) (0xc00031f680) Stream removed, broadcasting: 1\nI1005 17:25:36.407515 1531 log.go:181] (0xc000f42fd0) (0xc000b88aa0) Stream removed, broadcasting: 3\nI1005 17:25:36.407520 1531 log.go:181] (0xc000f42fd0) (0xc000b88d20) Stream removed, broadcasting: 5\n" Oct 5 17:25:36.412: INFO: stdout: "" Oct 5 17:25:36.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31238/ ; done' Oct 5 17:25:36.718: INFO: stderr: "I1005 17:25:36.556410 1549 log.go:181] (0xc0000ff810) (0xc0007c6640) Create stream\nI1005 17:25:36.556463 1549 log.go:181] (0xc0000ff810) (0xc0007c6640) Stream added, broadcasting: 1\nI1005 17:25:36.562943 1549 log.go:181] (0xc0000ff810) Reply frame received for 1\nI1005 17:25:36.562974 1549 log.go:181] (0xc0000ff810) (0xc0007c6000) Create stream\nI1005 17:25:36.562982 1549 log.go:181] (0xc0000ff810) (0xc0007c6000) Stream added, 
broadcasting: 3\nI1005 17:25:36.563665 1549 log.go:181] (0xc0000ff810) Reply frame received for 3\nI1005 17:25:36.563687 1549 log.go:181] (0xc0000ff810) (0xc000c28000) Create stream\nI1005 17:25:36.563694 1549 log.go:181] (0xc0000ff810) (0xc000c28000) Stream added, broadcasting: 5\nI1005 17:25:36.564329 1549 log.go:181] (0xc0000ff810) Reply frame received for 5\nI1005 17:25:36.629790 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.629824 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.629833 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.629846 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.629854 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.629861 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.632174 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.632195 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.632213 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.632661 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.632735 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.632756 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.632773 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.632782 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.632790 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.637777 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.637820 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.637835 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.638438 1549 log.go:181] (0xc0000ff810) Data frame received 
for 3\nI1005 17:25:36.638460 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.638479 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.638500 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.638518 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.638531 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.643210 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.643239 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.643255 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.643632 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.643648 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.643664 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.643694 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.643710 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.643728 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.648144 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.648168 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.648187 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.648574 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.648607 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.648620 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.648632 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.648639 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.648646 1549 log.go:181] (0xc0007c6000) (3) Data 
frame sent\nI1005 17:25:36.652966 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.652981 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.652994 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.653513 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.653549 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.653568 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.653590 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.653605 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.653624 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.659275 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.659301 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.659319 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.660142 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.660168 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.660187 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.660214 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.660228 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.660239 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.665216 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.665237 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.665252 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.665731 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.665745 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.665758 1549 log.go:181] 
(0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.665771 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.665780 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.665802 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.670782 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.670797 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.670816 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.671425 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.671438 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.671450 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.671460 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.671468 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.671476 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.675143 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.675177 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.675193 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.675844 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.675862 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.675878 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.675999 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.676014 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.676022 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.679379 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.679394 1549 
log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.679401 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.679975 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.679998 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.680007 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.680016 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.680021 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.680027 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.684935 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.684953 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.684965 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.685479 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.685495 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.685506 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.685657 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.685675 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.685689 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.689274 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.689291 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.689301 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.689597 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.689624 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.689637 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.689660 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 
17:25:36.689675 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.689689 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.693329 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.693351 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.693361 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.693745 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.693764 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.693778 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.693812 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.693828 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.693836 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.698255 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.698272 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.698283 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.698872 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.698889 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.698897 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.698906 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.698913 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.698924 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.704148 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.704182 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.704202 1549 log.go:181] (0xc0007c6000) (3) Data frame 
sent\nI1005 17:25:36.704715 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.704738 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.704771 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.704790 1549 log.go:181] (0xc000c28000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.704808 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.704816 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.710247 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.710263 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.710273 1549 log.go:181] (0xc0007c6000) (3) Data frame sent\nI1005 17:25:36.711036 1549 log.go:181] (0xc0000ff810) Data frame received for 5\nI1005 17:25:36.711063 1549 log.go:181] (0xc000c28000) (5) Data frame handling\nI1005 17:25:36.711089 1549 log.go:181] (0xc0000ff810) Data frame received for 3\nI1005 17:25:36.711102 1549 log.go:181] (0xc0007c6000) (3) Data frame handling\nI1005 17:25:36.713168 1549 log.go:181] (0xc0000ff810) Data frame received for 1\nI1005 17:25:36.713198 1549 log.go:181] (0xc0007c6640) (1) Data frame handling\nI1005 17:25:36.713217 1549 log.go:181] (0xc0007c6640) (1) Data frame sent\nI1005 17:25:36.713237 1549 log.go:181] (0xc0000ff810) (0xc0007c6640) Stream removed, broadcasting: 1\nI1005 17:25:36.713271 1549 log.go:181] (0xc0000ff810) Go away received\nI1005 17:25:36.713645 1549 log.go:181] (0xc0000ff810) (0xc0007c6640) Stream removed, broadcasting: 1\nI1005 17:25:36.713663 1549 log.go:181] (0xc0000ff810) (0xc0007c6000) Stream removed, broadcasting: 3\nI1005 17:25:36.713673 1549 log.go:181] (0xc0000ff810) (0xc000c28000) Stream removed, broadcasting: 5\n" Oct 5 17:25:36.719: INFO: stdout: 
"\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t\naffinity-nodeport-timeout-br96t" Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Received response from host: affinity-nodeport-timeout-br96t Oct 5 17:25:36.719: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31238/' Oct 5 17:25:36.930: INFO: stderr: "I1005 17:25:36.850580 1566 log.go:181] (0xc0006f6f20) (0xc0005a3f40) Create stream\nI1005 17:25:36.850643 1566 log.go:181] (0xc0006f6f20) (0xc0005a3f40) Stream added, broadcasting: 1\nI1005 17:25:36.854970 1566 log.go:181] (0xc0006f6f20) Reply frame received for 1\nI1005 17:25:36.855017 1566 log.go:181] (0xc0006f6f20) (0xc0004f0280) Create stream\nI1005 17:25:36.855036 1566 log.go:181] (0xc0006f6f20) (0xc0004f0280) Stream added, broadcasting: 3\nI1005 17:25:36.855885 1566 log.go:181] (0xc0006f6f20) Reply frame received for 3\nI1005 17:25:36.855923 1566 log.go:181] (0xc0006f6f20) (0xc0004f0dc0) Create stream\nI1005 17:25:36.855932 1566 log.go:181] (0xc0006f6f20) (0xc0004f0dc0) Stream added, broadcasting: 5\nI1005 17:25:36.856936 1566 log.go:181] (0xc0006f6f20) Reply frame received for 5\nI1005 17:25:36.919922 1566 log.go:181] (0xc0006f6f20) Data frame received for 5\nI1005 17:25:36.919951 1566 log.go:181] (0xc0004f0dc0) (5) Data frame handling\nI1005 17:25:36.919970 1566 log.go:181] (0xc0004f0dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:36.922935 1566 log.go:181] (0xc0006f6f20) Data frame received for 3\nI1005 17:25:36.922968 1566 log.go:181] (0xc0004f0280) (3) Data frame handling\nI1005 17:25:36.922992 1566 log.go:181] (0xc0004f0280) (3) Data frame sent\nI1005 17:25:36.924000 1566 log.go:181] (0xc0006f6f20) Data frame received for 5\nI1005 17:25:36.924033 1566 log.go:181] (0xc0004f0dc0) (5) Data frame handling\nI1005 17:25:36.924187 1566 log.go:181] (0xc0006f6f20) Data frame received for 3\nI1005 17:25:36.924205 1566 log.go:181] (0xc0004f0280) (3) Data frame handling\nI1005 17:25:36.926031 1566 log.go:181] (0xc0006f6f20) Data frame received for 1\nI1005 17:25:36.926052 1566 log.go:181] 
(0xc0005a3f40) (1) Data frame handling\nI1005 17:25:36.926064 1566 log.go:181] (0xc0005a3f40) (1) Data frame sent\nI1005 17:25:36.926085 1566 log.go:181] (0xc0006f6f20) (0xc0005a3f40) Stream removed, broadcasting: 1\nI1005 17:25:36.926105 1566 log.go:181] (0xc0006f6f20) Go away received\nI1005 17:25:36.926470 1566 log.go:181] (0xc0006f6f20) (0xc0005a3f40) Stream removed, broadcasting: 1\nI1005 17:25:36.926487 1566 log.go:181] (0xc0006f6f20) (0xc0004f0280) Stream removed, broadcasting: 3\nI1005 17:25:36.926494 1566 log.go:181] (0xc0006f6f20) (0xc0004f0dc0) Stream removed, broadcasting: 5\n" Oct 5 17:25:36.930: INFO: stdout: "affinity-nodeport-timeout-br96t" Oct 5 17:25:51.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31238/' Oct 5 17:25:52.170: INFO: stderr: "I1005 17:25:52.073465 1584 log.go:181] (0xc0008a6840) (0xc0007c6280) Create stream\nI1005 17:25:52.073540 1584 log.go:181] (0xc0008a6840) (0xc0007c6280) Stream added, broadcasting: 1\nI1005 17:25:52.075202 1584 log.go:181] (0xc0008a6840) Reply frame received for 1\nI1005 17:25:52.075226 1584 log.go:181] (0xc0008a6840) (0xc0007c6320) Create stream\nI1005 17:25:52.075241 1584 log.go:181] (0xc0008a6840) (0xc0007c6320) Stream added, broadcasting: 3\nI1005 17:25:52.076056 1584 log.go:181] (0xc0008a6840) Reply frame received for 3\nI1005 17:25:52.076103 1584 log.go:181] (0xc0008a6840) (0xc000c3e000) Create stream\nI1005 17:25:52.076117 1584 log.go:181] (0xc0008a6840) (0xc000c3e000) Stream added, broadcasting: 5\nI1005 17:25:52.077116 1584 log.go:181] (0xc0008a6840) Reply frame received for 5\nI1005 17:25:52.159195 1584 log.go:181] (0xc0008a6840) Data frame received for 5\nI1005 17:25:52.159250 1584 log.go:181] (0xc000c3e000) (5) Data frame handling\nI1005 17:25:52.159281 1584 log.go:181] (0xc000c3e000) (5) Data frame sent\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:25:52.161932 1584 log.go:181] (0xc0008a6840) Data frame received for 3\nI1005 17:25:52.161948 1584 log.go:181] (0xc0007c6320) (3) Data frame handling\nI1005 17:25:52.161955 1584 log.go:181] (0xc0007c6320) (3) Data frame sent\nI1005 17:25:52.162583 1584 log.go:181] (0xc0008a6840) Data frame received for 5\nI1005 17:25:52.162612 1584 log.go:181] (0xc000c3e000) (5) Data frame handling\nI1005 17:25:52.162805 1584 log.go:181] (0xc0008a6840) Data frame received for 3\nI1005 17:25:52.162819 1584 log.go:181] (0xc0007c6320) (3) Data frame handling\nI1005 17:25:52.164641 1584 log.go:181] (0xc0008a6840) Data frame received for 1\nI1005 17:25:52.164671 1584 log.go:181] (0xc0007c6280) (1) Data frame handling\nI1005 17:25:52.164697 1584 log.go:181] (0xc0007c6280) (1) Data frame sent\nI1005 17:25:52.164718 1584 log.go:181] (0xc0008a6840) (0xc0007c6280) Stream removed, broadcasting: 1\nI1005 17:25:52.164740 1584 log.go:181] (0xc0008a6840) Go away received\nI1005 17:25:52.165339 1584 log.go:181] (0xc0008a6840) (0xc0007c6280) Stream removed, broadcasting: 1\nI1005 17:25:52.165357 1584 log.go:181] (0xc0008a6840) (0xc0007c6320) Stream removed, broadcasting: 3\nI1005 17:25:52.165366 1584 log.go:181] (0xc0008a6840) (0xc000c3e000) Stream removed, broadcasting: 5\n" Oct 5 17:25:52.170: INFO: stdout: "affinity-nodeport-timeout-br96t" Oct 5 17:26:07.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-7660 execpod-affinityr46qq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:31238/' Oct 5 17:26:07.662: INFO: stderr: "I1005 17:26:07.536164 1602 log.go:181] (0xc000ac0dc0) (0xc000624960) Create stream\nI1005 17:26:07.536236 1602 log.go:181] (0xc000ac0dc0) (0xc000624960) Stream added, broadcasting: 1\nI1005 17:26:07.541862 1602 log.go:181] (0xc000ac0dc0) Reply frame received for 1\nI1005 17:26:07.541924 1602 log.go:181] (0xc000ac0dc0) 
(0xc000566000) Create stream\nI1005 17:26:07.541942 1602 log.go:181] (0xc000ac0dc0) (0xc000566000) Stream added, broadcasting: 3\nI1005 17:26:07.542821 1602 log.go:181] (0xc000ac0dc0) Reply frame received for 3\nI1005 17:26:07.542863 1602 log.go:181] (0xc000ac0dc0) (0xc000625040) Create stream\nI1005 17:26:07.542878 1602 log.go:181] (0xc000ac0dc0) (0xc000625040) Stream added, broadcasting: 5\nI1005 17:26:07.543707 1602 log.go:181] (0xc000ac0dc0) Reply frame received for 5\nI1005 17:26:07.647533 1602 log.go:181] (0xc000ac0dc0) Data frame received for 5\nI1005 17:26:07.647580 1602 log.go:181] (0xc000625040) (5) Data frame handling\nI1005 17:26:07.647610 1602 log.go:181] (0xc000625040) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31238/\nI1005 17:26:07.654572 1602 log.go:181] (0xc000ac0dc0) Data frame received for 3\nI1005 17:26:07.654618 1602 log.go:181] (0xc000566000) (3) Data frame handling\nI1005 17:26:07.654648 1602 log.go:181] (0xc000566000) (3) Data frame sent\nI1005 17:26:07.654663 1602 log.go:181] (0xc000ac0dc0) Data frame received for 3\nI1005 17:26:07.654677 1602 log.go:181] (0xc000566000) (3) Data frame handling\nI1005 17:26:07.654722 1602 log.go:181] (0xc000ac0dc0) Data frame received for 5\nI1005 17:26:07.654746 1602 log.go:181] (0xc000625040) (5) Data frame handling\nI1005 17:26:07.656398 1602 log.go:181] (0xc000ac0dc0) Data frame received for 1\nI1005 17:26:07.656420 1602 log.go:181] (0xc000624960) (1) Data frame handling\nI1005 17:26:07.656441 1602 log.go:181] (0xc000624960) (1) Data frame sent\nI1005 17:26:07.656455 1602 log.go:181] (0xc000ac0dc0) (0xc000624960) Stream removed, broadcasting: 1\nI1005 17:26:07.656481 1602 log.go:181] (0xc000ac0dc0) Go away received\nI1005 17:26:07.657242 1602 log.go:181] (0xc000ac0dc0) (0xc000624960) Stream removed, broadcasting: 1\nI1005 17:26:07.657286 1602 log.go:181] (0xc000ac0dc0) (0xc000566000) Stream removed, broadcasting: 3\nI1005 17:26:07.657300 1602 log.go:181] (0xc000ac0dc0) 
(0xc000625040) Stream removed, broadcasting: 5\n" Oct 5 17:26:07.662: INFO: stdout: "affinity-nodeport-timeout-xqdnw" Oct 5 17:26:07.662: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7660, will wait for the garbage collector to delete the pods Oct 5 17:26:07.754: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 16.396897ms Oct 5 17:26:08.354: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.25197ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:26:20.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7660" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:79.443 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":139,"skipped":2382,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:26:20.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-70087736-e579-4551-ae22-7c0f0c7ecc2a STEP: Creating a pod to test consume secrets Oct 5 17:26:20.219: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a" in namespace "projected-841" to be "Succeeded or Failed" Oct 5 17:26:20.236: INFO: Pod "pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.723793ms Oct 5 17:26:22.249: INFO: Pod "pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029779096s Oct 5 17:26:24.253: INFO: Pod "pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033338954s STEP: Saw pod success Oct 5 17:26:24.253: INFO: Pod "pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a" satisfied condition "Succeeded or Failed" Oct 5 17:26:24.256: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a container projected-secret-volume-test: STEP: delete the pod Oct 5 17:26:24.298: INFO: Waiting for pod pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a to disappear Oct 5 17:26:24.310: INFO: Pod pod-projected-secrets-76cb6dc9-c0d0-42d1-a06c-c9aba985852a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:26:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-841" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2387,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:26:24.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 
[BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 5 17:26:24.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5716' Oct 5 17:26:24.765: INFO: stderr: "" Oct 5 17:26:24.765: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 5 17:26:24.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5716' Oct 5 17:26:24.944: INFO: stderr: "" Oct 5 17:26:24.945: INFO: stdout: "update-demo-nautilus-5d4gp update-demo-nautilus-bth5h " Oct 5 17:26:24.945: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5d4gp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5716' Oct 5 17:26:25.057: INFO: stderr: "" Oct 5 17:26:25.057: INFO: stdout: "" Oct 5 17:26:25.057: INFO: update-demo-nautilus-5d4gp is created but not running Oct 5 17:26:30.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5716' Oct 5 17:26:30.163: INFO: stderr: "" Oct 5 17:26:30.163: INFO: stdout: "update-demo-nautilus-5d4gp update-demo-nautilus-bth5h " Oct 5 17:26:30.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5d4gp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5716' Oct 5 17:26:30.259: INFO: stderr: "" Oct 5 17:26:30.259: INFO: stdout: "true" Oct 5 17:26:30.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5d4gp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5716' Oct 5 17:26:30.360: INFO: stderr: "" Oct 5 17:26:30.360: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:26:30.360: INFO: validating pod update-demo-nautilus-5d4gp Oct 5 17:26:30.365: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:26:30.365: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 5 17:26:30.365: INFO: update-demo-nautilus-5d4gp is verified up and running Oct 5 17:26:30.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bth5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5716' Oct 5 17:26:30.466: INFO: stderr: "" Oct 5 17:26:30.466: INFO: stdout: "true" Oct 5 17:26:30.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bth5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5716' Oct 5 17:26:30.566: INFO: stderr: "" Oct 5 17:26:30.567: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:26:30.567: INFO: validating pod update-demo-nautilus-bth5h Oct 5 17:26:30.570: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:26:30.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 17:26:30.570: INFO: update-demo-nautilus-bth5h is verified up and running STEP: using delete to clean up resources Oct 5 17:26:30.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5716' Oct 5 17:26:30.688: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 5 17:26:30.688: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 5 17:26:30.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5716' Oct 5 17:26:30.787: INFO: stderr: "No resources found in kubectl-5716 namespace.\n" Oct 5 17:26:30.787: INFO: stdout: "" Oct 5 17:26:30.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5716 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 17:26:30.884: INFO: stderr: "" Oct 5 17:26:30.884: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:26:30.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5716" for this suite. 
• [SLOW TEST:6.560 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":141,"skipped":2391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:26:30.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:26:31.016: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7499' Oct 5 17:26:31.277: INFO: stderr: "" Oct 5 17:26:31.277: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 5 17:26:31.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7499' Oct 5 17:26:31.985: INFO: stderr: "" Oct 5 17:26:31.985: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 5 17:26:33.093: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:26:33.093: INFO: Found 0 / 1 Oct 5 17:26:33.990: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:26:33.990: INFO: Found 0 / 1 Oct 5 17:26:34.993: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:26:34.993: INFO: Found 1 / 1 Oct 5 17:26:34.993: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 5 17:26:34.996: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:26:34.996: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 5 17:26:34.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config describe pod agnhost-primary-hw6jp --namespace=kubectl-7499' Oct 5 17:26:35.122: INFO: stderr: "" Oct 5 17:26:35.122: INFO: stdout: "Name: agnhost-primary-hw6jp\nNamespace: kubectl-7499\nPriority: 0\nNode: latest-worker2/172.18.0.16\nStart Time: Mon, 05 Oct 2020 17:26:31 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.231\nIPs:\n IP: 10.244.2.231\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f5970f083da9b8024759425b09f8349a2e190b714c2052cbfc28576970bf8074\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 05 Oct 2020 17:26:34 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bv98m (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bv98m:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bv98m\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-7499/agnhost-primary-hw6jp to latest-worker2\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Oct 5 17:26:35.122: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-7499' Oct 5 17:26:35.250: INFO: stderr: "" Oct 5 17:26:35.250: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7499\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-hw6jp\n" Oct 5 17:26:35.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-7499' Oct 5 17:26:35.369: INFO: stderr: "" Oct 5 17:26:35.369: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7499\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.105.126.116\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.231:6379\nSession Affinity: None\nEvents: \n" Oct 5 17:26:35.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config describe node latest-control-plane' Oct 5 17:26:35.535: INFO: stderr: "" Oct 5 17:26:35.535: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: 
true\nCreationTimestamp: Wed, 23 Sep 2020 08:30:10 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 05 Oct 2020 17:26:32 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 05 Oct 2020 17:23:59 +0000 Wed, 23 Sep 2020 08:30:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 05 Oct 2020 17:23:59 +0000 Wed, 23 Sep 2020 08:30:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 05 Oct 2020 17:23:59 +0000 Wed, 23 Sep 2020 08:30:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 05 Oct 2020 17:23:59 +0000 Wed, 23 Sep 2020 08:30:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.14\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 64beec4989ed4d8f8bd5309b1762a577\n System UUID: 7b68d344-27a4-495e-8640-0edbcc5a7172\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-dxfkr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n 
kube-system coredns-f9fd979d6-kgmhr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kindnet-nfg88 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 12d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-proxy-c4wjp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n local-path-storage local-path-provisioner-78776bfc44-9j8tz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 5 17:26:35.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config describe namespace kubectl-7499' Oct 5 17:26:35.654: INFO: stderr: "" Oct 5 17:26:35.654: INFO: stdout: "Name: kubectl-7499\nLabels: e2e-framework=kubectl\n e2e-run=e5380171-1611-4037-975e-f9b0a62834a8\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:26:35.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7499" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":142,"skipped":2417,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:26:35.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-f791d854-e0f6-41ac-bce7-9da0262c6835 in namespace container-probe-1107 Oct 5 17:26:39.775: INFO: Started pod liveness-f791d854-e0f6-41ac-bce7-9da0262c6835 in namespace container-probe-1107 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 17:26:39.779: INFO: Initial restart count of pod liveness-f791d854-e0f6-41ac-bce7-9da0262c6835 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 
17:30:40.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1107" for this suite. • [SLOW TEST:244.793 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2421,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:30:40.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:30:40.496: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 
--kubeconfig=/root/.kube/config version' Oct 5 17:30:41.000: INFO: stderr: "" Oct 5 17:30:41.001: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.3-rc.0\", GitCommit:\"d60a97015628047ffba1adebed86432370c354bc\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T14:01:27Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:30:41.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8502" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":144,"skipped":2440,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:30:41.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:30:41.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4920" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":145,"skipped":2450,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:30:41.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should 
provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:30:41.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d" in namespace "projected-2724" to be "Succeeded or Failed" Oct 5 17:30:41.271: INFO: Pod "downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d": Phase="Pending", Reason="", readiness=false. Elapsed: 69.126315ms Oct 5 17:30:43.275: INFO: Pod "downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073845854s Oct 5 17:30:45.281: INFO: Pod "downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079095751s STEP: Saw pod success Oct 5 17:30:45.281: INFO: Pod "downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d" satisfied condition "Succeeded or Failed" Oct 5 17:30:45.283: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d container client-container: STEP: delete the pod Oct 5 17:30:45.337: INFO: Waiting for pod downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d to disappear Oct 5 17:30:45.340: INFO: Pod downwardapi-volume-eb1e6209-5d79-4673-89da-51e0270af63d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:30:45.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2724" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2459,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:30:45.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6431 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 5 17:30:45.429: INFO: Found 0 stateful pods, waiting for 3 Oct 5 17:30:55.435: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 17:30:55.435: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 
17:30:55.435: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Oct 5 17:31:05.434: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 17:31:05.434: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 17:31:05.434: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 5 17:31:05.482: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 5 17:31:15.535: INFO: Updating stateful set ss2 Oct 5 17:31:15.588: INFO: Waiting for Pod statefulset-6431/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Oct 5 17:31:26.587: INFO: Found 2 stateful pods, waiting for 3 Oct 5 17:31:36.593: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 17:31:36.593: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 17:31:36.593: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 5 17:31:36.622: INFO: Updating stateful set ss2 Oct 5 17:31:36.699: INFO: Waiting for Pod statefulset-6431/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 5 17:31:46.709: INFO: Waiting for Pod statefulset-6431/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 5 17:31:56.729: INFO: Updating stateful set ss2 Oct 5 17:31:56.788: INFO: Waiting for StatefulSet statefulset-6431/ss2 to complete update Oct 5 17:31:56.788: INFO: Waiting for Pod statefulset-6431/ss2-0 to have revision ss2-84f9d6bf57 
update revision ss2-65c7964b94 Oct 5 17:32:06.798: INFO: Waiting for StatefulSet statefulset-6431/ss2 to complete update Oct 5 17:32:06.798: INFO: Waiting for Pod statefulset-6431/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 17:32:16.797: INFO: Deleting all statefulset in ns statefulset-6431 Oct 5 17:32:16.800: INFO: Scaling statefulset ss2 to 0 Oct 5 17:32:46.823: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 17:32:46.825: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:32:46.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6431" for this suite. 
• [SLOW TEST:121.528 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":147,"skipped":2465,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:32:46.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Oct 5 17:32:46.941: INFO: 
Waiting up to 5m0s for pod "var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764" in namespace "var-expansion-4157" to be "Succeeded or Failed" Oct 5 17:32:46.945: INFO: Pod "var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764": Phase="Pending", Reason="", readiness=false. Elapsed: 3.77426ms Oct 5 17:32:49.109: INFO: Pod "var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168107974s Oct 5 17:32:51.114: INFO: Pod "var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.172831844s STEP: Saw pod success Oct 5 17:32:51.114: INFO: Pod "var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764" satisfied condition "Succeeded or Failed" Oct 5 17:32:51.117: INFO: Trying to get logs from node latest-worker2 pod var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764 container dapi-container: STEP: delete the pod Oct 5 17:32:51.169: INFO: Waiting for pod var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764 to disappear Oct 5 17:32:51.178: INFO: Pod var-expansion-2fefb958-978c-4fbd-ab45-51fbce02b764 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:32:51.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4157" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2466,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:32:51.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-edd8134d-ef71-457b-884f-0508318a1ad2 STEP: Creating a pod to test consume secrets Oct 5 17:32:51.427: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03" in namespace "projected-9741" to be "Succeeded or Failed" Oct 5 17:32:51.455: INFO: Pod "pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03": Phase="Pending", Reason="", readiness=false. Elapsed: 27.664605ms Oct 5 17:32:53.486: INFO: Pod "pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058945712s Oct 5 17:32:55.490: INFO: Pod "pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062541593s STEP: Saw pod success Oct 5 17:32:55.490: INFO: Pod "pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03" satisfied condition "Succeeded or Failed" Oct 5 17:32:55.493: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03 container projected-secret-volume-test: STEP: delete the pod Oct 5 17:32:55.584: INFO: Waiting for pod pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03 to disappear Oct 5 17:32:55.601: INFO: Pod pod-projected-secrets-9d804c17-f783-4a5c-875a-3003c82dfc03 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:32:55.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9741" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2473,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:32:55.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] 
RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:32:55.766: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 5 17:32:55.780: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 5 17:33:00.783: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 5 17:33:00.783: INFO: Creating deployment "test-rolling-update-deployment" Oct 5 17:33:00.788: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 5 17:33:00.795: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 5 17:33:02.803: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 5 17:33:03.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515981, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515981, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515981, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737515980, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:33:05.121: INFO: Ensuring deployment 
"test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 17:33:05.131: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2177 /apis/apps/v1/namespaces/deployment-2177/deployments/test-rolling-update-deployment 91813e75-c074-4fff-802d-97de2faec142 3406149 1 2020-10-05 17:33:00 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-10-05 17:33:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 17:33:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004baa2d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-05 17:33:01 +0000 
UTC,LastTransitionTime:2020-10-05 17:33:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-10-05 17:33:04 +0000 UTC,LastTransitionTime:2020-10-05 17:33:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 5 17:33:05.134: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-2177 /apis/apps/v1/namespaces/deployment-2177/replicasets/test-rolling-update-deployment-c4cb8d6d9 2102624c-9957-41a9-9b20-7df50e3389c9 3406138 1 2020-10-05 17:33:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 91813e75-c074-4fff-802d-97de2faec142 0xc004baa800 0xc004baa801}] [] [{kube-controller-manager Update apps/v1 2020-10-05 17:33:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91813e75-c074-4fff-802d-97de2faec142\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004baa878 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:33:05.134: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 5 17:33:05.134: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2177 /apis/apps/v1/namespaces/deployment-2177/replicasets/test-rolling-update-controller d8337faf-92ac-4478-a720-40f1e11bbe34 3406148 2 2020-10-05 17:32:55 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 91813e75-c074-4fff-802d-97de2faec142 0xc004baa6ef 0xc004baa700}] [] [{e2e.test Update apps/v1 2020-10-05 17:32:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 17:33:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91813e75-c074-4fff-802d-97de2faec142\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004baa798 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:33:05.137: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-rsh5q" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-rsh5q test-rolling-update-deployment-c4cb8d6d9- deployment-2177 /api/v1/namespaces/deployment-2177/pods/test-rolling-update-deployment-c4cb8d6d9-rsh5q 2ebeb41b-4afb-4ea7-a154-03fee792d760 3406137 0 2020-10-05 17:33:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 2102624c-9957-41a9-9b20-7df50e3389c9 0xc004baad30 0xc004baad31}] [] [{kube-controller-manager Update v1 2020-10-05 17:33:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2102624c-9957-41a9-9b20-7df50e3389c9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 17:33:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sqjdq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sqjdq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sqjdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer
{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:33:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:33:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:33:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:33:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.239,StartTime:2020-10-05 17:33:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 17:33:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://3b3fe15e3dd1dff6e6cd1e40e9f47af9deb48f3ccf02f0c4ba3fb7a7fea94b12,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:05.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2177" for this suite. 
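The transient status in the dump above (`Replicas:2, UpdatedReplicas:1, UnavailableReplicas:1` for a one-replica deployment) follows from the default RollingUpdate parameters shown in the spec, `maxSurge: 25%` and `maxUnavailable: 25%`: percentage surge rounds up and percentage unavailable rounds down against the desired replica count. For one replica that resolves to surge 1, unavailable 0, so the controller creates the new pod alongside the old one (hence `Replicas:2`) and only deletes the old pod once the new one is ready; the not-yet-ready surged pod is what `UnavailableReplicas:1` counts. A sketch of that rounding, assuming the default percentages from the dump:

```python
import math

def rolling_update_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Resolve percentage maxSurge/maxUnavailable the way the deployment
    controller does: surge rounds up, unavailable rounds down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    # If both rounded to zero the rollout could never make progress,
    # so unavailable is bumped to 1 in that case.
    if surge == 0 and unavailable == 0:
        unavailable = 1
    return surge, unavailable

# For the 1-replica deployment in the log: one extra pod may surge, none
# may be unavailable, so the old pod survives until the new pod is ready.
surge, unavailable = rolling_update_bounds(1)
```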
• [SLOW TEST:9.535 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":150,"skipped":2484,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:05.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:33:05.310: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 5 17:33:08.290: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3829 create 
-f -' Oct 5 17:33:13.155: INFO: stderr: "" Oct 5 17:33:13.155: INFO: stdout: "e2e-test-crd-publish-openapi-2051-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 5 17:33:13.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3829 delete e2e-test-crd-publish-openapi-2051-crds test-cr' Oct 5 17:33:13.281: INFO: stderr: "" Oct 5 17:33:13.281: INFO: stdout: "e2e-test-crd-publish-openapi-2051-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 5 17:33:13.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3829 apply -f -' Oct 5 17:33:13.575: INFO: stderr: "" Oct 5 17:33:13.575: INFO: stdout: "e2e-test-crd-publish-openapi-2051-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 5 17:33:13.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3829 delete e2e-test-crd-publish-openapi-2051-crds test-cr' Oct 5 17:33:13.699: INFO: stderr: "" Oct 5 17:33:13.699: INFO: stdout: "e2e-test-crd-publish-openapi-2051-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 5 17:33:13.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2051-crds' Oct 5 17:33:13.993: INFO: stderr: "" Oct 5 17:33:13.993: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2051-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:16.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3829" for this suite. • [SLOW TEST:11.843 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":151,"skipped":2489,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:16.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's 
memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:33:17.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01" in namespace "downward-api-1985" to be "Succeeded or Failed" Oct 5 17:33:17.054: INFO: Pod "downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.665307ms Oct 5 17:33:19.058: INFO: Pod "downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007890097s Oct 5 17:33:21.063: INFO: Pod "downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012708909s STEP: Saw pod success Oct 5 17:33:21.063: INFO: Pod "downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01" satisfied condition "Succeeded or Failed" Oct 5 17:33:21.066: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01 container client-container: STEP: delete the pod Oct 5 17:33:21.101: INFO: Waiting for pod downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01 to disappear Oct 5 17:33:21.104: INFO: Pod downwardapi-volume-a220fdee-b90b-49b8-8008-db73aee5fc01 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:21.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1985" for this suite. 
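The Downward API test above mounts a volume whose file contents are rendered from the container's own resource fields. A hedged sketch of the kind of pod spec it creates, expressed as a plain dict; the field names follow the Kubernetes v1 API, but the object names, image, and limit value are illustrative, not the exact ones the framework generates:

```python
# Illustrative pod spec for a downward-API volume exposing the memory limit.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-demo"},  # assumed name
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
            "resources": {"limits": {"memory": "64Mi"}},  # assumed limit
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "memory_limit",
                    # resourceFieldRef projects the container's limit into the
                    # file; the value is divided by `divisor`, so with "1Mi"
                    # a 64Mi limit renders as "64".
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.memory",
                        "divisor": "1Mi",
                    },
                }],
            },
        }],
    },
}
```

The test then reads the container's logs (the "Trying to get logs … container client-container" step) and checks the rendered value against the declared limit.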
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2505,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:21.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6373.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6373.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6373.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6373.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6373.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6373.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 17:33:29.248: INFO: DNS probes using dns-6373/dns-test-38682d99-673e-4cef-9674-c14309c67423 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:29.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6373" for this suite. 
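The probe scripts above derive each pod's in-cluster A record by dashifying its IP address (the `hostname -i | awk -F.` pipeline), giving names of the form `<ip-with-dashes>.<namespace>.pod.cluster.local`. The same transformation in Python, using the namespace and pod IP that appear in this log:

```python
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the pod's in-cluster A record name: dots in the pod IP become
    dashes, followed by <namespace>.pod.<cluster-domain>."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# Mirrors what the wheezy/jessie probe scripts compute with awk:
record = pod_a_record("10.244.2.239", "dns-6373")
```

The probes then resolve that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file for each successful lookup, which is what the "DNS probes … succeeded" line reports.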
• [SLOW TEST:8.198 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":153,"skipped":2518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:29.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-b661740d-1e0d-467a-b698-f8b9e472d53f STEP: Creating a pod to test consume secrets Oct 5 17:33:29.741: INFO: Waiting up to 5m0s for pod "pod-secrets-7f100547-510c-4e18-aa97-326a462de0df" in namespace "secrets-867" to be "Succeeded or Failed" Oct 5 17:33:29.918: INFO: Pod "pod-secrets-7f100547-510c-4e18-aa97-326a462de0df": Phase="Pending", Reason="", 
readiness=false. Elapsed: 176.908771ms Oct 5 17:33:31.921: INFO: Pod "pod-secrets-7f100547-510c-4e18-aa97-326a462de0df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180603602s Oct 5 17:33:33.953: INFO: Pod "pod-secrets-7f100547-510c-4e18-aa97-326a462de0df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.212382337s STEP: Saw pod success Oct 5 17:33:33.953: INFO: Pod "pod-secrets-7f100547-510c-4e18-aa97-326a462de0df" satisfied condition "Succeeded or Failed" Oct 5 17:33:33.955: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-7f100547-510c-4e18-aa97-326a462de0df container secret-volume-test: STEP: delete the pod Oct 5 17:33:34.374: INFO: Waiting for pod pod-secrets-7f100547-510c-4e18-aa97-326a462de0df to disappear Oct 5 17:33:34.384: INFO: Pod pod-secrets-7f100547-510c-4e18-aa97-326a462de0df no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:34.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-867" for this suite. 
• [SLOW TEST:5.078 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":154,"skipped":2549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:34.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-d3f290fd-8374-4bdc-a4d6-05cc0bef19d4 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d3f290fd-8374-4bdc-a4d6-05cc0bef19d4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:40.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9998" for this suite. • [SLOW TEST:6.305 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2629,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:40.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Oct 5 17:33:40.774: INFO: created 
test-pod-1 Oct 5 17:33:40.787: INFO: created test-pod-2 Oct 5 17:33:40.840: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:33:40.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7373" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":156,"skipped":2638,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:33:41.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1005 17:33:42.193362 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
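The garbage-collector test above deletes the Deployment without orphaning, so its ReplicaSet and Pods are removed asynchronously; that is why the log briefly reports "expected 0 rs, got 1 rs" before collection catches up. A hypothetical sketch (not captured from this run) of the DeleteOptions body a client sends for such a non-orphaning delete:

```python
# Hypothetical sketch: the options a client supplies when deleting a
# Deployment so that its dependents (ReplicaSet, Pods) are garbage
# collected rather than orphaned.  propagationPolicy is the field the
# garbage collector honors; "Orphan" would leave the ReplicaSet behind,
# and "Foreground" would block the owner's deletion on its dependents.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Background",
}
print(delete_options)
```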
Oct 5 17:34:44.604: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:34:44.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3875" for this suite. • [SLOW TEST:63.609 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":157,"skipped":2654,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:34:44.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 
[It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c Oct 5 17:34:44.734: INFO: Pod name my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c: Found 0 pods out of 1 Oct 5 17:34:49.748: INFO: Pod name my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c: Found 1 pods out of 1 Oct 5 17:34:49.748: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c" are running Oct 5 17:34:49.759: INFO: Pod "my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c-vvvpt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:34:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:34:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:34:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 17:34:44 +0000 UTC Reason: Message:}]) Oct 5 17:34:49.760: INFO: Trying to dial the pod Oct 5 17:34:54.773: INFO: Controller my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c: Got expected result from replica 1 [my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c-vvvpt]: "my-hostname-basic-0b953253-de0a-4351-8025-32bb86e09d9c-vvvpt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:34:54.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "replication-controller-6897" for this suite. • [SLOW TEST:10.168 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":158,"skipped":2666,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:34:54.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 5 17:34:54.851: INFO: PodSpec: initContainers in spec.initContainers 
[AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:03.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5252" for this suite. • [SLOW TEST:8.601 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":159,"skipped":2693,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:03.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 17:35:06.493: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:06.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2788" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2707,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:06.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 5 17:35:06.798: INFO: Waiting up to 5m0s for pod "pod-04018c02-76d7-46a3-b044-0dce8412cb4a" in namespace "emptydir-4708" to be "Succeeded or Failed" Oct 5 17:35:06.802: INFO: Pod "pod-04018c02-76d7-46a3-b044-0dce8412cb4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227717ms Oct 5 17:35:08.811: INFO: Pod "pod-04018c02-76d7-46a3-b044-0dce8412cb4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013053818s Oct 5 17:35:10.816: INFO: Pod "pod-04018c02-76d7-46a3-b044-0dce8412cb4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017784524s STEP: Saw pod success Oct 5 17:35:10.816: INFO: Pod "pod-04018c02-76d7-46a3-b044-0dce8412cb4a" satisfied condition "Succeeded or Failed" Oct 5 17:35:10.818: INFO: Trying to get logs from node latest-worker pod pod-04018c02-76d7-46a3-b044-0dce8412cb4a container test-container: STEP: delete the pod Oct 5 17:35:10.902: INFO: Waiting for pod pod-04018c02-76d7-46a3-b044-0dce8412cb4a to disappear Oct 5 17:35:10.914: INFO: Pod pod-04018c02-76d7-46a3-b044-0dce8412cb4a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:10.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4708" for this suite. 
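The emptyDir test just above writes a file into a tmpfs-backed volume mounted with mode 0777 and asserts on the rendered permission string. As a small standard-library sketch of how that mode renders (nothing cluster-specific; the variable names are illustrative):

```python
import stat

# Render the permission string for mode 0777 the way a directory listing
# would show it: "-rwxrwxrwx" for a regular file, "drwxrwxrwx" for the
# mount point itself.
file_perms = stat.filemode(stat.S_IFREG | 0o777)
dir_perms = stat.filemode(stat.S_IFDIR | 0o777)
print(file_perms, dir_perms)
# -> -rwxrwxrwx drwxrwxrwx
```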
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2710,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:10.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 5 17:35:10.982: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-782' Oct 5 17:35:11.339: INFO: stderr: "" Oct 5 17:35:11.339: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 5 17:35:11.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-782' Oct 5 17:35:11.563: INFO: stderr: "" Oct 5 17:35:11.563: INFO: stdout: "update-demo-nautilus-l6jxw update-demo-nautilus-xnfnj " Oct 5 17:35:11.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l6jxw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:11.706: INFO: stderr: "" Oct 5 17:35:11.706: INFO: stdout: "" Oct 5 17:35:11.706: INFO: update-demo-nautilus-l6jxw is created but not running Oct 5 17:35:16.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-782' Oct 5 17:35:16.810: INFO: stderr: "" Oct 5 17:35:16.810: INFO: stdout: "update-demo-nautilus-l6jxw update-demo-nautilus-xnfnj " Oct 5 17:35:16.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l6jxw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:16.910: INFO: stderr: "" Oct 5 17:35:16.910: INFO: stdout: "true" Oct 5 17:35:16.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l6jxw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:17.019: INFO: stderr: "" Oct 5 17:35:17.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:35:17.019: INFO: validating pod update-demo-nautilus-l6jxw Oct 5 17:35:17.023: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:35:17.023: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 17:35:17.023: INFO: update-demo-nautilus-l6jxw is verified up and running Oct 5 17:35:17.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnfnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:17.128: INFO: stderr: "" Oct 5 17:35:17.128: INFO: stdout: "true" Oct 5 17:35:17.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnfnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:17.237: INFO: stderr: "" Oct 5 17:35:17.237: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:35:17.237: INFO: validating pod update-demo-nautilus-xnfnj Oct 5 17:35:17.241: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:35:17.241: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 5 17:35:17.241: INFO: update-demo-nautilus-xnfnj is verified up and running STEP: scaling down the replication controller Oct 5 17:35:17.244: INFO: scanned /root for discovery docs: Oct 5 17:35:17.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-782' Oct 5 17:35:18.447: INFO: stderr: "" Oct 5 17:35:18.447: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 5 17:35:18.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-782' Oct 5 17:35:18.553: INFO: stderr: "" Oct 5 17:35:18.553: INFO: stdout: "update-demo-nautilus-l6jxw update-demo-nautilus-xnfnj " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 5 17:35:23.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-782' Oct 5 17:35:23.666: INFO: stderr: "" Oct 5 17:35:23.666: INFO: stdout: "update-demo-nautilus-xnfnj " Oct 5 17:35:23.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnfnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:23.760: INFO: stderr: "" Oct 5 17:35:23.760: INFO: stdout: "true" Oct 5 17:35:23.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnfnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:23.893: INFO: stderr: "" Oct 5 17:35:23.893: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:35:23.893: INFO: validating pod update-demo-nautilus-xnfnj Oct 5 17:35:23.899: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:35:23.899: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 17:35:23.899: INFO: update-demo-nautilus-xnfnj is verified up and running STEP: scaling up the replication controller Oct 5 17:35:23.904: INFO: scanned /root for discovery docs: Oct 5 17:35:23.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-782' Oct 5 17:35:25.094: INFO: stderr: "" Oct 5 17:35:25.094: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 5 17:35:25.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-782' Oct 5 17:35:25.200: INFO: stderr: "" Oct 5 17:35:25.200: INFO: stdout: "update-demo-nautilus-r457g update-demo-nautilus-xnfnj " Oct 5 17:35:25.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r457g -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:25.301: INFO: stderr: "" Oct 5 17:35:25.301: INFO: stdout: "" Oct 5 17:35:25.301: INFO: update-demo-nautilus-r457g is created but not running Oct 5 17:35:30.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-782' Oct 5 17:35:30.417: INFO: stderr: "" Oct 5 17:35:30.417: INFO: stdout: "update-demo-nautilus-r457g update-demo-nautilus-xnfnj " Oct 5 17:35:30.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r457g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:30.518: INFO: stderr: "" Oct 5 17:35:30.519: INFO: stdout: "true" Oct 5 17:35:30.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r457g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:30.615: INFO: stderr: "" Oct 5 17:35:30.615: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:35:30.615: INFO: validating pod update-demo-nautilus-r457g Oct 5 17:35:30.619: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:35:30.619: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 5 17:35:30.619: INFO: update-demo-nautilus-r457g is verified up and running Oct 5 17:35:30.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnfnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:30.726: INFO: stderr: "" Oct 5 17:35:30.726: INFO: stdout: "true" Oct 5 17:35:30.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xnfnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-782' Oct 5 17:35:30.842: INFO: stderr: "" Oct 5 17:35:30.842: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 17:35:30.842: INFO: validating pod update-demo-nautilus-xnfnj Oct 5 17:35:30.846: INFO: got data: { "image": "nautilus.jpg" } Oct 5 17:35:30.846: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 17:35:30.846: INFO: update-demo-nautilus-xnfnj is verified up and running STEP: using delete to clean up resources Oct 5 17:35:30.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-782' Oct 5 17:35:30.963: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 5 17:35:30.963: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 5 17:35:30.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-782' Oct 5 17:35:31.060: INFO: stderr: "No resources found in kubectl-782 namespace.\n" Oct 5 17:35:31.060: INFO: stdout: "" Oct 5 17:35:31.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-782 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 17:35:31.159: INFO: stderr: "" Oct 5 17:35:31.159: INFO: stdout: "update-demo-nautilus-r457g\nupdate-demo-nautilus-xnfnj\n" Oct 5 17:35:31.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-782' Oct 5 17:35:31.838: INFO: stderr: "No resources found in kubectl-782 namespace.\n" Oct 5 17:35:31.838: INFO: stdout: "" Oct 5 17:35:31.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-782 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 17:35:31.966: INFO: stderr: "" Oct 5 17:35:31.966: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:31.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-782" for this suite. 
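The scale test above repeatedly shells out to `kubectl get pods -o template`, splits the space-separated pod names, and retries every few seconds until the count matches the desired replicas ("Replicas for name=update-demo: expected=1 actual=2" is one such retry). A hedged Python sketch of that poll-and-check pattern (the function names and sample template output are illustrative, not part of the e2e framework):

```python
import time

def wait_for(predicate, timeout=30.0, interval=5.0):
    """Poll predicate() until it returns a truthy value or the timeout
    expires, mirroring the test's retry loop around kubectl."""
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.0f}s")
        time.sleep(interval)

def replicas_match(template_stdout: str, expected: int):
    """Parse the space-separated pod names the Go template prints and
    return them only when the count matches the expected replicas."""
    names = template_stdout.split()
    return names if len(names) == expected else None

# Sample template output resembling the log's "update-demo-nautilus-xnfnj ".
print(replicas_match("update-demo-nautilus-xnfnj ", 1))
```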
• [SLOW TEST:21.051 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":162,"skipped":2711,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:31.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:35:32.049: INFO: Creating deployment "test-recreate-deployment" Oct 5 17:35:32.073: INFO: Waiting deployment 
"test-recreate-deployment" to be updated to revision 1 Oct 5 17:35:32.125: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 5 17:35:34.235: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 5 17:35:34.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516132, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516132, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516132, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516132, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:35:36.241: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 5 17:35:36.249: INFO: Updating deployment test-recreate-deployment Oct 5 17:35:36.249: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 17:35:36.864: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2877 /apis/apps/v1/namespaces/deployment-2877/deployments/test-recreate-deployment f7976be4-0c8e-4341-97aa-84739d00b00e 3407051 2 2020-10-05 17:35:32 +0000 UTC map[name:sample-pod-3] 
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-05 17:35:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 17:35:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004886e88 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-05 17:35:36 +0000 UTC,LastTransitionTime:2020-10-05 17:35:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-10-05 17:35:36 +0000 UTC,LastTransitionTime:2020-10-05 17:35:32 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 5 17:35:36.868: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-2877 /apis/apps/v1/namespaces/deployment-2877/replicasets/test-recreate-deployment-f79dd4667 3bf1260e-7e4a-4ea8-b855-ef0e065ab2a8 3407049 1 2020-10-05 17:35:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f7976be4-0c8e-4341-97aa-84739d00b00e 0xc0057ea1d0 0xc0057ea1d1}] [] [{kube-controller-manager Update apps/v1 2020-10-05 17:35:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7976be4-0c8e-4341-97aa-84739d00b00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0057ea248 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:35:36.868: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 5 17:35:36.869: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-2877 /apis/apps/v1/namespaces/deployment-2877/replicasets/test-recreate-deployment-c96cf48f 0392da9d-8b63-41a5-ae5f-a5f6555a192b 3407040 2 2020-10-05 17:35:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f7976be4-0c8e-4341-97aa-84739d00b00e 0xc0057ea0df 0xc0057ea0f0}] [] [{kube-controller-manager Update apps/v1 2020-10-05 17:35:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7976be4-0c8e-4341-97aa-84739d00b00e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelect
or{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0057ea168 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:35:36.894: INFO: Pod "test-recreate-deployment-f79dd4667-gk7ql" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-gk7ql test-recreate-deployment-f79dd4667- deployment-2877 /api/v1/namespaces/deployment-2877/pods/test-recreate-deployment-f79dd4667-gk7ql 738978b0-6727-463d-9ad7-570530314386 3407052 0 2020-10-05 17:35:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 3bf1260e-7e4a-4ea8-b855-ef0e065ab2a8 0xc0057ea720 0xc0057ea721}] [] [{kube-controller-manager Update v1 2020-10-05 17:35:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bf1260e-7e4a-4ea8-b855-ef0e065ab2a8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 17:35:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hvj72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hvj72,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hvj72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{
Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:35:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:35:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:35:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:35:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-05 17:35:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:36.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2877" for this suite. 
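Editor's note: the Deployment object dumped in the RecreateDeployment test above can be condensed into a readable manifest sketch. Every field below (name, replica count, selector, `Recreate` strategy, container name, image) appears in the logged `DeploymentSpec`; defaults such as `revisionHistoryLimit` are omitted.

```yaml
# Sketch of "test-recreate-deployment" as dumped in the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate           # old pods are deleted before new pods are created
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
```

With `type: Recreate` (as opposed to the default `RollingUpdate`), a rollout first scales the old ReplicaSet to zero and only then brings up the new one, which is exactly the old-pods-then-new-pods ordering the test verifies.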
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":163,"skipped":2731,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:36.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 5 17:35:36.966: INFO: >>> kubeConfig: /root/.kube/config Oct 5 17:35:39.959: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:50.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5364" for this suite. 
• [SLOW TEST:13.952 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":164,"skipped":2739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:50.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:35:50.913: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca" in namespace "downward-api-341" to be "Succeeded or Failed" Oct 5 17:35:50.966: INFO: Pod "downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca": Phase="Pending", Reason="", readiness=false. Elapsed: 52.894804ms Oct 5 17:35:53.038: INFO: Pod "downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125058449s Oct 5 17:35:55.043: INFO: Pod "downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129081589s STEP: Saw pod success Oct 5 17:35:55.043: INFO: Pod "downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca" satisfied condition "Succeeded or Failed" Oct 5 17:35:55.046: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca container client-container: STEP: delete the pod Oct 5 17:35:55.093: INFO: Waiting for pod downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca to disappear Oct 5 17:35:55.122: INFO: Pod downwardapi-volume-a9a58199-6a53-40a0-bbd7-19a0e0e20aca no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:35:55.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-341" for this suite. 
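Editor's note: the Downward API test above does not dump its pod spec, but a plausible sketch of this kind of pod follows. The container name `client-container` comes from the log; the image, paths, and memory value are assumptions illustrating the standard `resourceFieldRef` downward-API volume layout.

```yaml
# Hypothetical pod spec for a "provide container's memory request" check.
# Only the container name is taken from the log; everything else is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                          # assumption
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"                      # assumption
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:                   # exposes the container's own request
          containerName: client-container
          resource: requests.memory
```

The test then reads the container's log and checks that the printed value matches the declared memory request, which is why the pod only needs to reach `Succeeded`.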
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":165,"skipped":2762,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:35:55.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7202 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 5 17:35:55.396: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 5 17:35:55.565: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:35:57.570: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:35:59.570: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:01.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:03.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:05.569: INFO: The 
status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:07.570: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:09.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:11.570: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:13.570: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:15.569: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:17.570: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:36:19.570: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 5 17:36:19.575: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 5 17:36:23.636: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.7:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7202 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:36:23.636: INFO: >>> kubeConfig: /root/.kube/config I1005 17:36:23.667950 7 log.go:181] (0xc00662a580) (0xc002740a00) Create stream I1005 17:36:23.667986 7 log.go:181] (0xc00662a580) (0xc002740a00) Stream added, broadcasting: 1 I1005 17:36:23.670303 7 log.go:181] (0xc00662a580) Reply frame received for 1 I1005 17:36:23.670349 7 log.go:181] (0xc00662a580) (0xc00250cb40) Create stream I1005 17:36:23.670359 7 log.go:181] (0xc00662a580) (0xc00250cb40) Stream added, broadcasting: 3 I1005 17:36:23.671524 7 log.go:181] (0xc00662a580) Reply frame received for 3 I1005 17:36:23.671569 7 log.go:181] (0xc00662a580) (0xc0012b5ae0) Create stream I1005 17:36:23.671583 7 log.go:181] (0xc00662a580) (0xc0012b5ae0) Stream added, broadcasting: 5 I1005 17:36:23.672525 7 log.go:181] (0xc00662a580) Reply frame received for 5 I1005 17:36:23.741613 7 log.go:181] (0xc00662a580) Data frame received for 5 I1005 17:36:23.741655 7 log.go:181] 
(0xc0012b5ae0) (5) Data frame handling I1005 17:36:23.741707 7 log.go:181] (0xc00662a580) Data frame received for 3 I1005 17:36:23.741753 7 log.go:181] (0xc00250cb40) (3) Data frame handling I1005 17:36:23.741800 7 log.go:181] (0xc00250cb40) (3) Data frame sent I1005 17:36:23.741829 7 log.go:181] (0xc00662a580) Data frame received for 3 I1005 17:36:23.741845 7 log.go:181] (0xc00250cb40) (3) Data frame handling I1005 17:36:23.743817 7 log.go:181] (0xc00662a580) Data frame received for 1 I1005 17:36:23.743842 7 log.go:181] (0xc002740a00) (1) Data frame handling I1005 17:36:23.743865 7 log.go:181] (0xc002740a00) (1) Data frame sent I1005 17:36:23.744022 7 log.go:181] (0xc00662a580) (0xc002740a00) Stream removed, broadcasting: 1 I1005 17:36:23.744056 7 log.go:181] (0xc00662a580) Go away received I1005 17:36:23.744181 7 log.go:181] (0xc00662a580) (0xc002740a00) Stream removed, broadcasting: 1 I1005 17:36:23.744215 7 log.go:181] (0xc00662a580) (0xc00250cb40) Stream removed, broadcasting: 3 I1005 17:36:23.744237 7 log.go:181] (0xc00662a580) (0xc0012b5ae0) Stream removed, broadcasting: 5 Oct 5 17:36:23.744: INFO: Found all expected endpoints: [netserver-0] Oct 5 17:36:23.748: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.247:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7202 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:36:23.748: INFO: >>> kubeConfig: /root/.kube/config I1005 17:36:23.781044 7 log.go:181] (0xc0042286e0) (0xc0012b5ea0) Create stream I1005 17:36:23.781075 7 log.go:181] (0xc0042286e0) (0xc0012b5ea0) Stream added, broadcasting: 1 I1005 17:36:23.783361 7 log.go:181] (0xc0042286e0) Reply frame received for 1 I1005 17:36:23.783414 7 log.go:181] (0xc0042286e0) (0xc002740aa0) Create stream I1005 17:36:23.783473 7 log.go:181] (0xc0042286e0) (0xc002740aa0) Stream added, broadcasting: 3 I1005 
17:36:23.784645 7 log.go:181] (0xc0042286e0) Reply frame received for 3 I1005 17:36:23.784693 7 log.go:181] (0xc0042286e0) (0xc002740b40) Create stream I1005 17:36:23.784709 7 log.go:181] (0xc0042286e0) (0xc002740b40) Stream added, broadcasting: 5 I1005 17:36:23.785906 7 log.go:181] (0xc0042286e0) Reply frame received for 5 I1005 17:36:23.851124 7 log.go:181] (0xc0042286e0) Data frame received for 3 I1005 17:36:23.851159 7 log.go:181] (0xc002740aa0) (3) Data frame handling I1005 17:36:23.851182 7 log.go:181] (0xc002740aa0) (3) Data frame sent I1005 17:36:23.851196 7 log.go:181] (0xc0042286e0) Data frame received for 3 I1005 17:36:23.851210 7 log.go:181] (0xc002740aa0) (3) Data frame handling I1005 17:36:23.851308 7 log.go:181] (0xc0042286e0) Data frame received for 5 I1005 17:36:23.851332 7 log.go:181] (0xc002740b40) (5) Data frame handling I1005 17:36:23.853065 7 log.go:181] (0xc0042286e0) Data frame received for 1 I1005 17:36:23.853109 7 log.go:181] (0xc0012b5ea0) (1) Data frame handling I1005 17:36:23.853141 7 log.go:181] (0xc0012b5ea0) (1) Data frame sent I1005 17:36:23.853167 7 log.go:181] (0xc0042286e0) (0xc0012b5ea0) Stream removed, broadcasting: 1 I1005 17:36:23.853182 7 log.go:181] (0xc0042286e0) Go away received I1005 17:36:23.853288 7 log.go:181] (0xc0042286e0) (0xc0012b5ea0) Stream removed, broadcasting: 1 I1005 17:36:23.853318 7 log.go:181] (0xc0042286e0) (0xc002740aa0) Stream removed, broadcasting: 3 I1005 17:36:23.853328 7 log.go:181] (0xc0042286e0) (0xc002740b40) Stream removed, broadcasting: 5 Oct 5 17:36:23.853: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:36:23.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7202" for this suite. 
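Editor's note: the node-pod networking test above curls `http://<pod-ip>:8080/hostName` from a helper pod. A hedged sketch of that helper follows; the pod and container names come from the log, while the image tag, the `netexec` arguments, and the `hostNetwork` setting are assumptions about how the e2e framework typically builds it.

```yaml
# Hypothetical sketch of the helper pod used for the curl checks above.
# Names come from the log; image, args, and hostNetwork are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true          # assumption: curls run from the node's own network
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # tag assumed from this run
    args: ["netexec", "--http-port=8080"]            # serves the /hostName endpoint
```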
• [SLOW TEST:28.731 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2833,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:36:23.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-24d44b40-1ce1-4fab-a3a4-0b88d17ad780 in namespace container-probe-4712
Oct 5 17:36:27.969: INFO: Started pod busybox-24d44b40-1ce1-4fab-a3a4-0b88d17ad780 in namespace container-probe-4712
STEP: checking the pod's current state and verifying that restartCount is present
Oct 5 17:36:27.972: INFO: Initial restart count of pod busybox-24d44b40-1ce1-4fab-a3a4-0b88d17ad780 is 0
Oct 5 17:37:18.603: INFO: Restart count of pod container-probe-4712/busybox-24d44b40-1ce1-4fab-a3a4-0b88d17ad780 is now 1 (50.630861996s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:37:18.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4712" for this suite.
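The liveness-probe test above ran a busybox pod whose probe executes `cat /tmp/health`; once the container removes that file the probe fails repeatedly and the kubelet restarts the container, which the log records as the restart count going from 0 to 1. A toy model of that failure-threshold logic — a simplification for illustration only; real kubelet probing also involves `periodSeconds`, probe timeouts, and restart backoff:

```python
def run_liveness_loop(probe, failure_threshold=3, max_ticks=8):
    """Count restarts: after `failure_threshold` consecutive probe
    failures the container is 'restarted' and the failure counter
    resets, mimicking the kubelet's liveness handling."""
    restarts = 0
    consecutive_failures = 0
    for tick in range(max_ticks):
        if probe(tick):
            consecutive_failures = 0  # healthy probe resets the streak
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts

# "/tmp/health" exists for the first 4 ticks, then is removed,
# so the probe succeeds 4 times and then fails for good.
restarts = run_liveness_loop(lambda t: t < 4)
```

With a failure threshold of 3 and four failing ticks remaining, exactly one restart is recorded — the same single-restart transition the test waits for.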
• [SLOW TEST:54.795 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2838,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:37:18.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-2461dc9e-7929-405b-89a3-1889fbb5fe03
STEP: Creating a pod to test consume configMaps
Oct 5 17:37:18.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054" in namespace "configmap-5353" to be "Succeeded or Failed"
Oct 5 17:37:18.733: INFO: Pod "pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05666ms
Oct 5 17:37:20.788: INFO: Pod "pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064832235s
Oct 5 17:37:22.792: INFO: Pod "pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068973872s
STEP: Saw pod success
Oct 5 17:37:22.792: INFO: Pod "pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054" satisfied condition "Succeeded or Failed"
Oct 5 17:37:22.795: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054 container configmap-volume-test:
STEP: delete the pod
Oct 5 17:37:22.813: INFO: Waiting for pod pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054 to disappear
Oct 5 17:37:22.842: INFO: Pod pod-configmaps-4924adca-c244-43a2-96b8-c46c6d2ae054 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:37:22.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5353" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2844,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:37:22.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:37:22.939: INFO: Waiting up to 5m0s for pod "busybox-user-65534-277e6d4c-791c-41cf-8627-fdb150e1a382" in namespace "security-context-test-8301" to be "Succeeded or Failed"
Oct 5 17:37:22.952: INFO: Pod "busybox-user-65534-277e6d4c-791c-41cf-8627-fdb150e1a382": Phase="Pending", Reason="", readiness=false. Elapsed: 13.497342ms
Oct 5 17:37:24.958: INFO: Pod "busybox-user-65534-277e6d4c-791c-41cf-8627-fdb150e1a382": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01885688s
Oct 5 17:37:26.961: INFO: Pod "busybox-user-65534-277e6d4c-791c-41cf-8627-fdb150e1a382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022324667s
Oct 5 17:37:26.961: INFO: Pod "busybox-user-65534-277e6d4c-791c-41cf-8627-fdb150e1a382" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:37:26.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8301" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2906,"failed":0}
SSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:37:26.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2954, will wait for the garbage collector to delete the pods
Oct 5 17:37:33.232: INFO: Deleting Job.batch foo took: 6.683946ms
Oct 5 17:37:33.633: INFO: Terminating Job.batch foo pods took: 400.199676ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:38:06.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2954" for this suite.
• [SLOW TEST:39.478 seconds]
[sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":170,"skipped":2911,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:38:06.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:38:10.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9624" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2922,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:38:10.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ps7w6 in namespace proxy-8581
I1005 17:38:10.643591 7 runners.go:190] Created replication controller with
name: proxy-service-ps7w6, namespace: proxy-8581, replica count: 1 I1005 17:38:11.693998 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:38:12.694224 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:38:13.694430 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1005 17:38:14.694720 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1005 17:38:15.694981 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1005 17:38:16.695207 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1005 17:38:17.695455 7 runners.go:190] proxy-service-ps7w6 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:38:17.700: INFO: setup took 7.114766807s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 5 17:38:17.708: INFO: (0) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 7.83809ms) Oct 5 17:38:17.708: INFO: (0) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 7.963803ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 9.56902ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 9.605186ms) Oct 5 17:38:17.710: INFO: (0) 
/api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 9.592433ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 9.692518ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 9.645459ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 9.902786ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 9.812611ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 9.710937ms) Oct 5 17:38:17.710: INFO: (0) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 9.822377ms) Oct 5 17:38:17.716: INFO: (0) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 16.115108ms) Oct 5 17:38:17.716: INFO: (0) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 16.405418ms) Oct 5 17:38:17.716: INFO: (0) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 16.377583ms) Oct 5 17:38:17.716: INFO: (0) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 16.29656ms) Oct 5 17:38:17.716: INFO: (0) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... (200; 4.65526ms) Oct 5 17:38:17.721: INFO: (1) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test<... 
(200; 5.129669ms) Oct 5 17:38:17.723: INFO: (1) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 6.566153ms) Oct 5 17:38:17.723: INFO: (1) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 6.6459ms) Oct 5 17:38:17.723: INFO: (1) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 6.62727ms) Oct 5 17:38:17.724: INFO: (1) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 7.325589ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 8.103686ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 8.285191ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 8.293627ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 8.160885ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 8.367227ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 8.253051ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 8.372412ms) Oct 5 17:38:17.725: INFO: (1) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 8.343291ms) Oct 5 17:38:17.730: INFO: (2) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.806164ms) Oct 5 17:38:17.730: INFO: (2) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 5.195948ms) Oct 5 17:38:17.730: INFO: (2) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.303346ms) Oct 5 
17:38:17.730: INFO: (2) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 5.192001ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 6.697611ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 6.746449ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 7.072372ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 7.060309ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 7.074655ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 7.225844ms) Oct 5 17:38:17.732: INFO: (2) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... (200; 8.307527ms) Oct 5 17:38:17.740: INFO: (3) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 6.488144ms) Oct 5 17:38:17.740: INFO: (3) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 6.444728ms) Oct 5 17:38:17.740: INFO: (3) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 6.616819ms) Oct 5 17:38:17.740: INFO: (3) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test<... 
(200; 7.921159ms) Oct 5 17:38:17.741: INFO: (3) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 7.896803ms) Oct 5 17:38:17.741: INFO: (3) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 7.984526ms) Oct 5 17:38:17.741: INFO: (3) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 7.991176ms) Oct 5 17:38:17.741: INFO: (3) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 8.036089ms) Oct 5 17:38:17.741: INFO: (3) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 8.193174ms) Oct 5 17:38:17.742: INFO: (3) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 8.375187ms) Oct 5 17:38:17.745: INFO: (4) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.073922ms) Oct 5 17:38:17.745: INFO: (4) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.336678ms) Oct 5 17:38:17.745: INFO: (4) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.611747ms) Oct 5 17:38:17.745: INFO: (4) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 3.615065ms) Oct 5 17:38:17.745: INFO: (4) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 3.680186ms) Oct 5 17:38:17.745: INFO: (4) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test<... 
(200; 4.802641ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 4.811333ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.851579ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 4.854623ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.848678ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.941464ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 5.416829ms) Oct 5 17:38:17.747: INFO: (4) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 5.649132ms) Oct 5 17:38:17.751: INFO: (5) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.037121ms) Oct 5 17:38:17.751: INFO: (5) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 3.765133ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... 
(200; 4.26453ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.422056ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.412599ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.434004ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.540262ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.800102ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 4.789575ms) Oct 5 17:38:17.752: INFO: (5) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.89902ms) Oct 5 17:38:17.753: INFO: (5) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 5.07853ms) Oct 5 17:38:17.753: INFO: (5) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 5.126936ms) Oct 5 17:38:17.753: INFO: (5) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 5.105274ms) Oct 5 17:38:17.753: INFO: (5) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 5.129996ms) Oct 5 17:38:17.753: INFO: (5) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 5.112768ms) Oct 5 17:38:17.753: INFO: (5) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... 
(200; 3.779562ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.806431ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.785085ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.944721ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test (200; 4.269559ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.23014ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.452671ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.469034ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 4.439241ms) Oct 5 17:38:17.757: INFO: (6) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.62756ms) Oct 5 17:38:17.758: INFO: (6) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.318428ms) Oct 5 17:38:17.758: INFO: (6) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 5.536369ms) Oct 5 17:38:17.758: INFO: (6) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 5.520468ms) Oct 5 17:38:17.762: INFO: (7) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 3.890069ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.133479ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 
4.142164ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.191732ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.173031ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 4.343149ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 4.450771ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 4.41283ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 4.431795ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.414986ms) Oct 5 17:38:17.763: INFO: (7) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... (200; 3.266386ms) Oct 5 17:38:17.767: INFO: (8) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.373473ms) Oct 5 17:38:17.767: INFO: (8) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 3.406912ms) Oct 5 17:38:17.767: INFO: (8) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test<... 
(200; 5.322889ms) Oct 5 17:38:17.769: INFO: (8) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 5.439219ms) Oct 5 17:38:17.769: INFO: (8) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 5.429977ms) Oct 5 17:38:17.769: INFO: (8) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 5.548069ms) Oct 5 17:38:17.769: INFO: (8) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 5.607121ms) Oct 5 17:38:17.769: INFO: (8) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 6.20657ms) Oct 5 17:38:17.770: INFO: (8) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 7.086566ms) Oct 5 17:38:17.770: INFO: (8) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 7.149455ms) Oct 5 17:38:17.770: INFO: (8) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 7.19121ms) Oct 5 17:38:17.770: INFO: (8) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 7.225161ms) Oct 5 17:38:17.770: INFO: (8) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 7.225199ms) Oct 5 17:38:17.774: INFO: (9) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.301458ms) Oct 5 17:38:17.774: INFO: (9) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 3.287481ms) Oct 5 17:38:17.775: INFO: (9) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... 
(200; 4.248888ms) Oct 5 17:38:17.775: INFO: (9) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.205922ms) Oct 5 17:38:17.775: INFO: (9) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.82082ms) Oct 5 17:38:17.775: INFO: (9) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.790525ms) Oct 5 17:38:17.775: INFO: (9) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.837682ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.003809ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 5.046056ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 5.063439ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 5.14435ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 5.192712ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 5.129368ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 5.17518ms) Oct 5 17:38:17.776: INFO: (9) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 5.282207ms) Oct 5 17:38:17.779: INFO: (10) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 3.577515ms) Oct 5 17:38:17.779: INFO: (10) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 3.675562ms) Oct 5 17:38:17.779: INFO: (10) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 3.69627ms) Oct 5 
17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.688117ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.198078ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 4.189844ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.213546ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.240438ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.314723ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... (200; 4.254222ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.632389ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.673896ms) Oct 5 17:38:17.780: INFO: (10) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.624158ms) Oct 5 17:38:17.781: INFO: (10) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 4.88378ms) Oct 5 17:38:17.781: INFO: (10) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 5.107418ms) Oct 5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 3.532123ms) Oct 5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.522389ms) Oct 5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 3.679833ms) Oct 
5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.947262ms) Oct 5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 4.099332ms) Oct 5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.242063ms) Oct 5 17:38:17.785: INFO: (11) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.335109ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.925874ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 5.018144ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 5.009574ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 5.08115ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 5.116322ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.100485ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 5.14201ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 5.083952ms) Oct 5 17:38:17.786: INFO: (11) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test<... 
(200; 3.101177ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.457494ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 4.032983ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 4.490555ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 3.961475ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 4.792631ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 4.734031ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 4.800134ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.645853ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.733547ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 5.206898ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.991218ms) Oct 5 17:38:17.791: INFO: (12) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test (200; 3.2579ms) Oct 5 17:38:17.795: INFO: (13) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 3.558996ms) Oct 5 17:38:17.795: INFO: (13) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.773629ms) Oct 5 17:38:17.795: INFO: (13) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: 
bar (200; 3.942949ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.130117ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 4.203701ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.41789ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.493988ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 4.496875ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 4.480342ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.702682ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 4.698393ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.701984ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.782792ms) Oct 5 17:38:17.796: INFO: (13) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.754944ms) Oct 5 17:38:17.799: INFO: (14) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 2.085476ms) Oct 5 17:38:17.800: INFO: (14) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.233784ms) Oct 5 17:38:17.800: INFO: (14) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... 
(200; 3.332458ms) Oct 5 17:38:17.800: INFO: (14) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 3.977343ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.617382ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 4.707891ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.649696ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.652109ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.657785ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 4.74639ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 4.745914ms) Oct 5 17:38:17.801: INFO: (14) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test (200; 2.015698ms) Oct 5 17:38:17.804: INFO: (15) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 2.346673ms) Oct 5 17:38:17.805: INFO: (15) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.021754ms) Oct 5 17:38:17.805: INFO: (15) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.077464ms) Oct 5 17:38:17.805: INFO: (15) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.094742ms) Oct 5 17:38:17.805: INFO: (15) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... (200; 4.117508ms) Oct 5 17:38:17.805: INFO: (15) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... 
(200; 4.110139ms) Oct 5 17:38:17.805: INFO: (15) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.19284ms) Oct 5 17:38:17.806: INFO: (15) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 4.219594ms) Oct 5 17:38:17.806: INFO: (15) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 4.390352ms) Oct 5 17:38:17.806: INFO: (15) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.488977ms) Oct 5 17:38:17.806: INFO: (15) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.462958ms) Oct 5 17:38:17.809: INFO: (16) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 3.089077ms) Oct 5 17:38:17.809: INFO: (16) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 3.40668ms) Oct 5 17:38:17.809: INFO: (16) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.518994ms) Oct 5 17:38:17.810: INFO: (16) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 3.636879ms) Oct 5 17:38:17.810: INFO: (16) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.618351ms) Oct 5 17:38:17.810: INFO: (16) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 4.592374ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 4.666941ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 4.798866ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.8202ms) Oct 5 17:38:17.811: INFO: (16) 
/api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... (200; 4.810431ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 4.851083ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 4.843664ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 4.870629ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 4.957223ms) Oct 5 17:38:17.811: INFO: (16) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: ... (200; 3.66481ms) Oct 5 17:38:17.815: INFO: (17) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.720201ms) Oct 5 17:38:17.815: INFO: (17) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 4.466787ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 5.275319ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.3048ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... 
(200; 5.303583ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:462/proxy/: tls qux (200; 5.378458ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.316509ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 5.377059ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 5.355033ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 5.39596ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 5.422155ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 5.341558ms) Oct 5 17:38:17.816: INFO: (17) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test<... (200; 3.670471ms) Oct 5 17:38:17.820: INFO: (18) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 3.702149ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 5.239874ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 5.228682ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 5.329334ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... 
(200; 5.266113ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5/proxy/: test (200; 5.246347ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 5.302885ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 5.268957ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 5.328262ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 5.393453ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 5.354458ms) Oct 5 17:38:17.822: INFO: (18) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 5.403846ms) Oct 5 17:38:17.825: INFO: (19) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:1080/proxy/: ... (200; 2.603884ms) Oct 5 17:38:17.825: INFO: (19) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 2.624751ms) Oct 5 17:38:17.825: INFO: (19) /api/v1/namespaces/proxy-8581/pods/proxy-service-ps7w6-qw7r5:1080/proxy/: test<... 
(200; 2.669573ms) Oct 5 17:38:17.825: INFO: (19) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:443/proxy/: test (200; 2.910028ms) Oct 5 17:38:17.825: INFO: (19) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:162/proxy/: bar (200; 2.971164ms) Oct 5 17:38:17.825: INFO: (19) /api/v1/namespaces/proxy-8581/pods/https:proxy-service-ps7w6-qw7r5:460/proxy/: tls baz (200; 3.006643ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/pods/http:proxy-service-ps7w6-qw7r5:160/proxy/: foo (200; 3.505028ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname2/proxy/: bar (200; 3.8132ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname1/proxy/: foo (200; 3.804619ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname1/proxy/: tls baz (200; 3.861801ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/services/proxy-service-ps7w6:portname2/proxy/: bar (200; 3.871081ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/services/http:proxy-service-ps7w6:portname1/proxy/: foo (200; 3.902914ms) Oct 5 17:38:17.826: INFO: (19) /api/v1/namespaces/proxy-8581/services/https:proxy-service-ps7w6:tlsportname2/proxy/: tls qux (200; 4.064188ms) STEP: deleting ReplicationController proxy-service-ps7w6 in namespace proxy-8581, will wait for the garbage collector to delete the pods Oct 5 17:38:17.904: INFO: Deleting ReplicationController proxy-service-ps7w6 took: 26.150931ms Oct 5 17:38:20.204: INFO: Terminating ReplicationController proxy-service-ps7w6 pods took: 2.300255757s [AfterEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:38:29.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8581" for this 
suite.
• [SLOW TEST:19.367 seconds]
[sig-network] Proxy
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
should proxy through a service and a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":172,"skipped":2939,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:38:29.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:38:30.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7006" for this suite.
STEP: Destroying namespace "nspatchtest-2ba12709-d1d4-470d-92ef-e796039cb0e5-2904" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":173,"skipped":2945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:38:30.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 5 17:38:40.358: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 5 17:38:40.364: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 5 17:38:42.364: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 5 17:38:42.368: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 5 17:38:44.364: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 5 17:38:44.368: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:38:44.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1001" for this suite.
• [SLOW TEST:14.243 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":3015,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:38:44.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 5 17:38:44.474: INFO: Waiting up to 5m0s for pod "pod-16edce57-ab03-469d-8516-ceb40c741adc" in namespace "emptydir-9221" to be "Succeeded or Failed"
Oct 5 17:38:44.484: INFO: Pod "pod-16edce57-ab03-469d-8516-ceb40c741adc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.477346ms
Oct 5 17:38:46.896: INFO: Pod "pod-16edce57-ab03-469d-8516-ceb40c741adc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422109535s
Oct 5 17:38:48.900: INFO: Pod "pod-16edce57-ab03-469d-8516-ceb40c741adc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.425874363s
STEP: Saw pod success
Oct 5 17:38:48.900: INFO: Pod "pod-16edce57-ab03-469d-8516-ceb40c741adc" satisfied condition "Succeeded or Failed"
Oct 5 17:38:48.902: INFO: Trying to get logs from node latest-worker pod pod-16edce57-ab03-469d-8516-ceb40c741adc container test-container:
STEP: delete the pod
Oct 5 17:38:49.120: INFO: Waiting for pod pod-16edce57-ab03-469d-8516-ceb40c741adc to disappear
Oct 5 17:38:49.197: INFO: Pod pod-16edce57-ab03-469d-8516-ceb40c741adc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:38:49.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9221" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":175,"skipped":3026,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:38:49.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 17:38:50.020: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 17:38:52.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516330, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516330, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516330, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516329, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 17:38:54.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516330, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516330, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516330, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516329, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 17:38:57.071: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:39:07.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7798" for this suite.
STEP: Destroying namespace "webhook-7798-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.025 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":176,"skipped":3045,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:39:07.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1005 17:39:08.461447 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 5 17:40:10.479: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:40:10.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-359" for this suite.
• [SLOW TEST:63.145 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":177,"skipped":3064,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:40:10.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:40:10.573: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 5 17:40:13.525: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8406 create -f -' Oct 5 17:40:18.537: INFO: stderr: "" Oct 5 17:40:18.537: INFO: stdout: "e2e-test-crd-publish-openapi-3534-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 5 17:40:18.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8406 delete e2e-test-crd-publish-openapi-3534-crds test-cr' Oct 5 17:40:18.653: INFO: stderr: "" Oct 5 17:40:18.653: INFO: stdout: "e2e-test-crd-publish-openapi-3534-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 5 17:40:18.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8406 apply -f -' Oct 5 17:40:18.923: INFO: stderr: "" Oct 5 17:40:18.923: INFO: stdout: "e2e-test-crd-publish-openapi-3534-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 5 17:40:18.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8406 delete e2e-test-crd-publish-openapi-3534-crds test-cr' Oct 5 17:40:19.041: INFO: stderr: "" Oct 5 17:40:19.041: INFO: stdout: "e2e-test-crd-publish-openapi-3534-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 5 17:40:19.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3534-crds' Oct 5 17:40:19.291: INFO: stderr: "" Oct 5 17:40:19.291: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3534-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the 
versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:40:22.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8406" for this suite. 
• [SLOW TEST:11.747 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":178,"skipped":3072,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:40:22.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:40:23.097: INFO: deployment "sample-webhook-deployment" doesn't 
have the required revision set Oct 5 17:40:25.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516423, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516423, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516423, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516423, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:40:28.142: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and 
validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:40:28.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4837" for this suite. STEP: Destroying namespace "webhook-4837-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.991 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":179,"skipped":3083,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:40:28.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:40:44.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9875" for this suite. • [SLOW TEST:16.146 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":180,"skipped":3094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:40:44.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:40:48.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7285" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":181,"skipped":3125,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:40:48.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7303.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7303.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 17:40:54.693: INFO: DNS probes using dns-7303/dns-test-14fff3a6-7a0e-4be4-b4dd-de76270afc51 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:40:54.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7303" for this suite. 
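The `awk -F.` fragment in the probe commands above derives the pod's cluster-DNS A record from its IP: dots become dashes, and `<namespace>.pod.<cluster-domain>` is appended. A small Python sketch of that transformation (the function name and defaults are illustrative; the `dns-7303` namespace and `cluster.local` domain come from the log):

```python
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the pod A-record name the dig probes query, e.g.
    10.244.1.7 in namespace dns-7303 ->
    10-244-1-7.dns-7303.pod.cluster.local
    """
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"
```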
• [SLOW TEST:6.287 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":182,"skipped":3128,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:40:54.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-309 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-309 to expose endpoints map[] Oct 5 17:40:55.309: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Oct 5 17:40:56.326: INFO: successfully validated that service endpoint-test2 in namespace services-309 
exposes endpoints map[] STEP: Creating pod pod1 in namespace services-309 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-309 to expose endpoints map[pod1:[80]] Oct 5 17:41:00.401: INFO: successfully validated that service endpoint-test2 in namespace services-309 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-309 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-309 to expose endpoints map[pod1:[80] pod2:[80]] Oct 5 17:41:04.480: INFO: successfully validated that service endpoint-test2 in namespace services-309 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-309 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-309 to expose endpoints map[pod2:[80]] Oct 5 17:41:04.523: INFO: successfully validated that service endpoint-test2 in namespace services-309 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-309 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-309 to expose endpoints map[] Oct 5 17:41:05.551: INFO: successfully validated that service endpoint-test2 in namespace services-309 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:41:05.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-309" for this suite. 
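The `map[pod1:[80] pod2:[80]]` notation above is the e2e framework's rendering of a pod-name-to-container-ports map behind the service. The transition sequence the test waits for can be sketched with a toy renderer (names and formatting helper are illustrative, mimicking the log output rather than reusing framework code):

```python
def render_endpoints(endpoints: dict) -> str:
    """Render pod -> ports the way the e2e framework logs it,
    e.g. {"pod1": [80]} becomes 'map[pod1:[80]]'."""
    inner = " ".join(f"{name}:{ports}" for name, ports in sorted(endpoints.items()))
    return f"map[{inner}]"

endpoints = {}             # service created, no ready pods: map[]
endpoints["pod1"] = [80]   # pod1 ready:                    map[pod1:[80]]
endpoints["pod2"] = [80]   # pod2 ready:                    map[pod1:[80] pod2:[80]]
del endpoints["pod1"]      # pod1 deleted:                  map[pod2:[80]]
```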
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.841 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":183,"skipped":3135,"failed":0} [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:41:05.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3849 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3849;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3849 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3849;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3849.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3849.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3849.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3849.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3849.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3849.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3849.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.40.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.40.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.40.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.40.193_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3849 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3849;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3849 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3849;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3849.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3849.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3849.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3849.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3849.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3849.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3849.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3849.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3849.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.40.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.40.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.40.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.40.193_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 17:41:12.110: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.118: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.120: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.122: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.125: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods 
dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.149: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.155: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.174: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.176: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.178: INFO: Unable to read jessie_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.181: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.183: INFO: Unable to read jessie_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested 
resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.188: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.190: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:12.207: INFO: Lookups using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3849 wheezy_tcp@dns-test-service.dns-3849 wheezy_udp@dns-test-service.dns-3849.svc wheezy_tcp@dns-test-service.dns-3849.svc wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3849 jessie_tcp@dns-test-service.dns-3849 jessie_udp@dns-test-service.dns-3849.svc jessie_tcp@dns-test-service.dns-3849.svc jessie_udp@_http._tcp.dns-test-service.dns-3849.svc jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc] Oct 5 17:41:17.216: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.220: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the 
requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.225: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.227: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.231: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.234: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.250: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.252: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could 
not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.254: INFO: Unable to read jessie_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.259: INFO: Unable to read jessie_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.262: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.265: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.268: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:17.282: INFO: Lookups using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3849 wheezy_tcp@dns-test-service.dns-3849 wheezy_udp@dns-test-service.dns-3849.svc wheezy_tcp@dns-test-service.dns-3849.svc wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3849 jessie_tcp@dns-test-service.dns-3849 jessie_udp@dns-test-service.dns-3849.svc jessie_tcp@dns-test-service.dns-3849.svc jessie_udp@_http._tcp.dns-test-service.dns-3849.svc jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc] Oct 5 17:41:22.212: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.215: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.218: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.220: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.222: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.224: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.226: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc from pod 
dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.256: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.282: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.285: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.287: INFO: Unable to read jessie_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.289: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.292: INFO: Unable to read jessie_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.302: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.305: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.308: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:22.326: INFO: Lookups using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3849 wheezy_tcp@dns-test-service.dns-3849 wheezy_udp@dns-test-service.dns-3849.svc wheezy_tcp@dns-test-service.dns-3849.svc wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3849 jessie_tcp@dns-test-service.dns-3849 jessie_udp@dns-test-service.dns-3849.svc jessie_tcp@dns-test-service.dns-3849.svc jessie_udp@_http._tcp.dns-test-service.dns-3849.svc jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc] Oct 5 17:41:27.215: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.218: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.221: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.225: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.228: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.234: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.237: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.260: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.264: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.267: INFO: Unable to read jessie_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.270: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.274: INFO: Unable to read jessie_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.298: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.301: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.304: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:27.322: INFO: Lookups using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3849 wheezy_tcp@dns-test-service.dns-3849 wheezy_udp@dns-test-service.dns-3849.svc wheezy_tcp@dns-test-service.dns-3849.svc wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3849 jessie_tcp@dns-test-service.dns-3849 jessie_udp@dns-test-service.dns-3849.svc jessie_tcp@dns-test-service.dns-3849.svc jessie_udp@_http._tcp.dns-test-service.dns-3849.svc jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc] 
Oct 5 17:41:32.212: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.215: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.218: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.221: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.225: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.228: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.231: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.234: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods 
dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.281: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.284: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.288: INFO: Unable to read jessie_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.291: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.294: INFO: Unable to read jessie_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.298: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.300: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.303: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested 
resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:32.323: INFO: Lookups using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3849 wheezy_tcp@dns-test-service.dns-3849 wheezy_udp@dns-test-service.dns-3849.svc wheezy_tcp@dns-test-service.dns-3849.svc wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3849 jessie_tcp@dns-test-service.dns-3849 jessie_udp@dns-test-service.dns-3849.svc jessie_tcp@dns-test-service.dns-3849.svc jessie_udp@_http._tcp.dns-test-service.dns-3849.svc jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc] Oct 5 17:41:37.213: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.216: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.219: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.222: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods 
dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.231: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.234: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.255: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.258: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.261: INFO: Unable to read jessie_udp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.265: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849 from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf) Oct 5 17:41:37.268: INFO: Unable to read jessie_udp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested 
resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf)
Oct 5 17:41:37.271: INFO: Unable to read jessie_tcp@dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf)
Oct 5 17:41:37.274: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf)
Oct 5 17:41:37.277: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc from pod dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf: the server could not find the requested resource (get pods dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf)
Oct 5 17:41:37.297: INFO: Lookups using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3849 wheezy_tcp@dns-test-service.dns-3849 wheezy_udp@dns-test-service.dns-3849.svc wheezy_tcp@dns-test-service.dns-3849.svc wheezy_udp@_http._tcp.dns-test-service.dns-3849.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3849.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3849 jessie_tcp@dns-test-service.dns-3849 jessie_udp@dns-test-service.dns-3849.svc jessie_tcp@dns-test-service.dns-3849.svc jessie_udp@_http._tcp.dns-test-service.dns-3849.svc jessie_tcp@_http._tcp.dns-test-service.dns-3849.svc]
Oct 5 17:41:42.341: INFO: DNS probes using dns-3849/dns-test-2b8a7cce-d501-4709-abbd-f62cb905d7bf succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:41:42.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3849" for this suite.
• [SLOW TEST:37.326 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":184,"skipped":3135,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:41:42.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:42:43.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-13" for this suite.
• [SLOW TEST:60.546 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":185,"skipped":3149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:42:43.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Oct 5 17:42:48.201: INFO: Successfully updated pod "labelsupdate84468db7-16e8-41bd-b133-31441c2dddf5"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:42:52.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7339" for this suite.
• [SLOW TEST:8.730 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":3171,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:42:52.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Oct 5 17:42:52.719: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Oct 5 17:42:54.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516572, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516572, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516572, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516572, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 17:42:57.769: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:42:57.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:42:58.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7910" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:6.891 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":187,"skipped":3176,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:42:59.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 5 17:43:05.730: INFO: Successfully updated pod "adopt-release-m2pdk" STEP: Checking that the Job readopts the Pod Oct 5 17:43:05.730: INFO: Waiting up to 15m0s for pod "adopt-release-m2pdk" in namespace "job-2138" to be "adopted" Oct 5 17:43:05.738: INFO: Pod "adopt-release-m2pdk": Phase="Running", Reason="", readiness=true. Elapsed: 8.203706ms Oct 5 17:43:07.743: INFO: Pod "adopt-release-m2pdk": Phase="Running", Reason="", readiness=true. Elapsed: 2.01305467s Oct 5 17:43:07.743: INFO: Pod "adopt-release-m2pdk" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 5 17:43:08.256: INFO: Successfully updated pod "adopt-release-m2pdk" STEP: Checking that the Job releases the Pod Oct 5 17:43:08.256: INFO: Waiting up to 15m0s for pod "adopt-release-m2pdk" in namespace "job-2138" to be "released" Oct 5 17:43:08.271: INFO: Pod "adopt-release-m2pdk": Phase="Running", Reason="", readiness=true. Elapsed: 14.376812ms Oct 5 17:43:10.322: INFO: Pod "adopt-release-m2pdk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.065932922s Oct 5 17:43:10.322: INFO: Pod "adopt-release-m2pdk" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:43:10.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2138" for this suite. • [SLOW TEST:11.359 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":188,"skipped":3200,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:43:10.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:43:11.387: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Oct 5 17:43:11.424: INFO: Number of nodes with available pods: 0 Oct 5 17:43:11.424: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Oct 5 17:43:11.526: INFO: Number of nodes with available pods: 0 Oct 5 17:43:11.526: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:12.530: INFO: Number of nodes with available pods: 0 Oct 5 17:43:12.530: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:13.988: INFO: Number of nodes with available pods: 0 Oct 5 17:43:13.988: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:14.559: INFO: Number of nodes with available pods: 0 Oct 5 17:43:14.559: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:15.540: INFO: Number of nodes with available pods: 1 Oct 5 17:43:15.540: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Oct 5 17:43:15.625: INFO: Number of nodes with available pods: 1 Oct 5 17:43:15.625: INFO: Number of running nodes: 0, number of available pods: 1 Oct 5 17:43:16.632: INFO: Number of nodes with available pods: 0 Oct 5 17:43:16.632: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Oct 5 17:43:16.661: INFO: Number of nodes with available pods: 0 Oct 5 17:43:16.661: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:17.665: INFO: Number of nodes with available pods: 0 Oct 5 17:43:17.665: INFO: Node latest-worker is running more than one daemon pod Oct 
5 17:43:18.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:18.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:19.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:19.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:20.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:20.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:21.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:21.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:22.665: INFO: Number of nodes with available pods: 0 Oct 5 17:43:22.665: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:23.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:23.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:24.665: INFO: Number of nodes with available pods: 0 Oct 5 17:43:24.665: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:25.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:25.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:26.665: INFO: Number of nodes with available pods: 0 Oct 5 17:43:26.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:27.665: INFO: Number of nodes with available pods: 0 Oct 5 17:43:27.665: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:28.665: INFO: Number of nodes with available pods: 0 Oct 5 17:43:28.665: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:29.666: INFO: Number of nodes with available pods: 0 Oct 5 17:43:29.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:30.689: INFO: Number of nodes with available pods: 0 Oct 5 17:43:30.689: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:31.809: INFO: Number of nodes with available pods: 0 Oct 5 17:43:31.809: INFO: Node 
latest-worker is running more than one daemon pod Oct 5 17:43:32.664: INFO: Number of nodes with available pods: 0 Oct 5 17:43:32.664: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:43:33.666: INFO: Number of nodes with available pods: 1 Oct 5 17:43:33.666: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2298, will wait for the garbage collector to delete the pods Oct 5 17:43:33.745: INFO: Deleting DaemonSet.extensions daemon-set took: 20.04813ms Oct 5 17:43:34.145: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.219829ms Oct 5 17:43:39.848: INFO: Number of nodes with available pods: 0 Oct 5 17:43:39.848: INFO: Number of running nodes: 0, number of available pods: 0 Oct 5 17:43:39.850: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2298/daemonsets","resourceVersion":"3409569"},"items":null} Oct 5 17:43:39.853: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2298/pods","resourceVersion":"3409569"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:43:39.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2298" for this suite. 
• [SLOW TEST:29.394 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":189,"skipped":3212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:43:39.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-671e91d0-6d76-4c92-9b34-3bf1f63c0775 STEP: Creating a pod to test consume secrets Oct 5 17:43:39.963: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484" in namespace "projected-4453" to be 
"Succeeded or Failed" Oct 5 17:43:39.979: INFO: Pod "pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484": Phase="Pending", Reason="", readiness=false. Elapsed: 16.104278ms Oct 5 17:43:41.984: INFO: Pod "pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021638951s Oct 5 17:43:43.988: INFO: Pod "pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025538297s STEP: Saw pod success Oct 5 17:43:43.988: INFO: Pod "pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484" satisfied condition "Succeeded or Failed" Oct 5 17:43:43.991: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484 container projected-secret-volume-test: STEP: delete the pod Oct 5 17:43:44.036: INFO: Waiting for pod pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484 to disappear Oct 5 17:43:44.039: INFO: Pod pod-projected-secrets-2367c528-3ffd-4422-ba99-c83cb0db0484 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:43:44.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4453" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":3262,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:43:44.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:43:44.272: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:43:45.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2221" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":191,"skipped":3266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:43:45.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Oct 5 17:45:45.988: INFO: Successfully updated pod "var-expansion-1f9c686c-c04c-4f39-a6b3-b3ef0aa70dc4" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 5 17:45:50.014: INFO: Deleting pod "var-expansion-1f9c686c-c04c-4f39-a6b3-b3ef0aa70dc4" in namespace "var-expansion-958" Oct 5 17:45:50.019: INFO: Wait up to 5m0s for pod "var-expansion-1f9c686c-c04c-4f39-a6b3-b3ef0aa70dc4" to be fully deleted [AfterEach] [k8s.io] Variable Expansion 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:46:24.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-958" for this suite. • [SLOW TEST:158.742 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":192,"skipped":3299,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:46:24.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:46:24.695: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:46:26.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:46:28.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516784, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:46:31.745: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:46:31.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6187" for this suite. STEP: Destroying namespace "webhook-6187-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.933 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":193,"skipped":3307,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:46:31.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:46:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5717" for this suite. • [SLOW TEST:16.438 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":194,"skipped":3309,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:46:48.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Oct 5 17:46:48.485: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:46:48.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-746" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":195,"skipped":3310,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:46:48.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2185 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2185 I1005 17:46:48.823111 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2185, replica count: 2 I1005 17:46:51.873714 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:46:54.873976 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 17:46:57.874321 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:46:57.874: INFO: Creating new exec pod Oct 5 17:47:02.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2185 execpod6bnqk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Oct 5 17:47:03.173: INFO: stderr: "I1005 17:47:03.080134 2629 log.go:181] (0xc00003a160) (0xc000a48960) Create stream\nI1005 17:47:03.080208 2629 log.go:181] (0xc00003a160) (0xc000a48960) Stream added, broadcasting: 1\nI1005 17:47:03.082481 2629 log.go:181] (0xc00003a160) Reply frame received for 1\nI1005 17:47:03.082551 2629 log.go:181] (0xc00003a160) (0xc0004fc000) Create stream\nI1005 17:47:03.082572 2629 log.go:181] (0xc00003a160) (0xc0004fc000) Stream added, broadcasting: 3\nI1005 17:47:03.083620 2629 log.go:181] (0xc00003a160) Reply frame received for 3\nI1005 17:47:03.083672 2629 log.go:181] (0xc00003a160) (0xc000c015e0) Create stream\nI1005 17:47:03.083689 2629 log.go:181] (0xc00003a160) (0xc000c015e0) Stream added, broadcasting: 5\nI1005 17:47:03.085083 2629 log.go:181] (0xc00003a160) Reply frame received for 5\nI1005 17:47:03.164602 2629 log.go:181] (0xc00003a160) Data frame received for 5\nI1005 17:47:03.164634 2629 log.go:181] (0xc000c015e0) (5) Data frame handling\nI1005 17:47:03.164654 2629 log.go:181] (0xc000c015e0) (5) Data frame sent\nI1005 17:47:03.164662 2629 log.go:181] (0xc00003a160) Data frame received for 5\nI1005 17:47:03.164668 2629 log.go:181] (0xc000c015e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1005 17:47:03.164753 2629 log.go:181] (0xc000c015e0) (5) Data frame sent\nI1005 17:47:03.165345 2629 log.go:181] (0xc00003a160) Data frame 
received for 3\nI1005 17:47:03.165374 2629 log.go:181] (0xc0004fc000) (3) Data frame handling\nI1005 17:47:03.165610 2629 log.go:181] (0xc00003a160) Data frame received for 5\nI1005 17:47:03.165639 2629 log.go:181] (0xc000c015e0) (5) Data frame handling\nI1005 17:47:03.167371 2629 log.go:181] (0xc00003a160) Data frame received for 1\nI1005 17:47:03.167395 2629 log.go:181] (0xc000a48960) (1) Data frame handling\nI1005 17:47:03.167409 2629 log.go:181] (0xc000a48960) (1) Data frame sent\nI1005 17:47:03.167423 2629 log.go:181] (0xc00003a160) (0xc000a48960) Stream removed, broadcasting: 1\nI1005 17:47:03.167438 2629 log.go:181] (0xc00003a160) Go away received\nI1005 17:47:03.167950 2629 log.go:181] (0xc00003a160) (0xc000a48960) Stream removed, broadcasting: 1\nI1005 17:47:03.167977 2629 log.go:181] (0xc00003a160) (0xc0004fc000) Stream removed, broadcasting: 3\nI1005 17:47:03.167990 2629 log.go:181] (0xc00003a160) (0xc000c015e0) Stream removed, broadcasting: 5\n" Oct 5 17:47:03.174: INFO: stdout: "" Oct 5 17:47:03.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2185 execpod6bnqk -- /bin/sh -x -c nc -zv -t -w 2 10.110.107.100 80' Oct 5 17:47:03.401: INFO: stderr: "I1005 17:47:03.313303 2647 log.go:181] (0xc0006c8000) (0xc000bc61e0) Create stream\nI1005 17:47:03.313362 2647 log.go:181] (0xc0006c8000) (0xc000bc61e0) Stream added, broadcasting: 1\nI1005 17:47:03.318427 2647 log.go:181] (0xc0006c8000) Reply frame received for 1\nI1005 17:47:03.318490 2647 log.go:181] (0xc0006c8000) (0xc000b7a000) Create stream\nI1005 17:47:03.318507 2647 log.go:181] (0xc0006c8000) (0xc000b7a000) Stream added, broadcasting: 3\nI1005 17:47:03.320267 2647 log.go:181] (0xc0006c8000) Reply frame received for 3\nI1005 17:47:03.320307 2647 log.go:181] (0xc0006c8000) (0xc00064a960) Create stream\nI1005 17:47:03.320326 2647 log.go:181] (0xc0006c8000) (0xc00064a960) Stream added, broadcasting: 5\nI1005 
17:47:03.321528 2647 log.go:181] (0xc0006c8000) Reply frame received for 5\nI1005 17:47:03.392927 2647 log.go:181] (0xc0006c8000) Data frame received for 5\nI1005 17:47:03.392982 2647 log.go:181] (0xc00064a960) (5) Data frame handling\nI1005 17:47:03.393002 2647 log.go:181] (0xc00064a960) (5) Data frame sent\nI1005 17:47:03.393014 2647 log.go:181] (0xc0006c8000) Data frame received for 5\nI1005 17:47:03.393024 2647 log.go:181] (0xc00064a960) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.107.100 80\nConnection to 10.110.107.100 80 port [tcp/http] succeeded!\nI1005 17:47:03.393050 2647 log.go:181] (0xc0006c8000) Data frame received for 3\nI1005 17:47:03.393073 2647 log.go:181] (0xc000b7a000) (3) Data frame handling\nI1005 17:47:03.394677 2647 log.go:181] (0xc0006c8000) Data frame received for 1\nI1005 17:47:03.394718 2647 log.go:181] (0xc000bc61e0) (1) Data frame handling\nI1005 17:47:03.394740 2647 log.go:181] (0xc000bc61e0) (1) Data frame sent\nI1005 17:47:03.394762 2647 log.go:181] (0xc0006c8000) (0xc000bc61e0) Stream removed, broadcasting: 1\nI1005 17:47:03.394790 2647 log.go:181] (0xc0006c8000) Go away received\nI1005 17:47:03.395316 2647 log.go:181] (0xc0006c8000) (0xc000bc61e0) Stream removed, broadcasting: 1\nI1005 17:47:03.395345 2647 log.go:181] (0xc0006c8000) (0xc000b7a000) Stream removed, broadcasting: 3\nI1005 17:47:03.395357 2647 log.go:181] (0xc0006c8000) (0xc00064a960) Stream removed, broadcasting: 5\n" Oct 5 17:47:03.402: INFO: stdout: "" Oct 5 17:47:03.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2185 execpod6bnqk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30040' Oct 5 17:47:03.605: INFO: stderr: "I1005 17:47:03.532061 2666 log.go:181] (0xc000200370) (0xc0003de460) Create stream\nI1005 17:47:03.532127 2666 log.go:181] (0xc000200370) (0xc0003de460) Stream added, broadcasting: 1\nI1005 17:47:03.537326 2666 log.go:181] (0xc000200370) Reply frame received for 
1\nI1005 17:47:03.537371 2666 log.go:181] (0xc000200370) (0xc0003dea00) Create stream\nI1005 17:47:03.537382 2666 log.go:181] (0xc000200370) (0xc0003dea00) Stream added, broadcasting: 3\nI1005 17:47:03.538403 2666 log.go:181] (0xc000200370) Reply frame received for 3\nI1005 17:47:03.538437 2666 log.go:181] (0xc000200370) (0xc000414000) Create stream\nI1005 17:47:03.538449 2666 log.go:181] (0xc000200370) (0xc000414000) Stream added, broadcasting: 5\nI1005 17:47:03.539271 2666 log.go:181] (0xc000200370) Reply frame received for 5\nI1005 17:47:03.597899 2666 log.go:181] (0xc000200370) Data frame received for 3\nI1005 17:47:03.597963 2666 log.go:181] (0xc0003dea00) (3) Data frame handling\nI1005 17:47:03.598003 2666 log.go:181] (0xc000200370) Data frame received for 5\nI1005 17:47:03.598027 2666 log.go:181] (0xc000414000) (5) Data frame handling\nI1005 17:47:03.598058 2666 log.go:181] (0xc000414000) (5) Data frame sent\nI1005 17:47:03.598097 2666 log.go:181] (0xc000200370) Data frame received for 5\nI1005 17:47:03.598114 2666 log.go:181] (0xc000414000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30040\nConnection to 172.18.0.15 30040 port [tcp/30040] succeeded!\nI1005 17:47:03.599539 2666 log.go:181] (0xc000200370) Data frame received for 1\nI1005 17:47:03.599570 2666 log.go:181] (0xc0003de460) (1) Data frame handling\nI1005 17:47:03.599607 2666 log.go:181] (0xc0003de460) (1) Data frame sent\nI1005 17:47:03.599637 2666 log.go:181] (0xc000200370) (0xc0003de460) Stream removed, broadcasting: 1\nI1005 17:47:03.599690 2666 log.go:181] (0xc000200370) Go away received\nI1005 17:47:03.600017 2666 log.go:181] (0xc000200370) (0xc0003de460) Stream removed, broadcasting: 1\nI1005 17:47:03.600037 2666 log.go:181] (0xc000200370) (0xc0003dea00) Stream removed, broadcasting: 3\nI1005 17:47:03.600052 2666 log.go:181] (0xc000200370) (0xc000414000) Stream removed, broadcasting: 5\n" Oct 5 17:47:03.606: INFO: stdout: "" Oct 5 17:47:03.606: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-2185 execpod6bnqk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30040' Oct 5 17:47:03.815: INFO: stderr: "I1005 17:47:03.737511 2684 log.go:181] (0xc000193970) (0xc000cf28c0) Create stream\nI1005 17:47:03.737573 2684 log.go:181] (0xc000193970) (0xc000cf28c0) Stream added, broadcasting: 1\nI1005 17:47:03.741873 2684 log.go:181] (0xc000193970) Reply frame received for 1\nI1005 17:47:03.741922 2684 log.go:181] (0xc000193970) (0xc000cf20a0) Create stream\nI1005 17:47:03.741934 2684 log.go:181] (0xc000193970) (0xc000cf20a0) Stream added, broadcasting: 3\nI1005 17:47:03.742920 2684 log.go:181] (0xc000193970) Reply frame received for 3\nI1005 17:47:03.742972 2684 log.go:181] (0xc000193970) (0xc000ba0d20) Create stream\nI1005 17:47:03.742996 2684 log.go:181] (0xc000193970) (0xc000ba0d20) Stream added, broadcasting: 5\nI1005 17:47:03.744004 2684 log.go:181] (0xc000193970) Reply frame received for 5\nI1005 17:47:03.808251 2684 log.go:181] (0xc000193970) Data frame received for 3\nI1005 17:47:03.808290 2684 log.go:181] (0xc000cf20a0) (3) Data frame handling\nI1005 17:47:03.808319 2684 log.go:181] (0xc000193970) Data frame received for 5\nI1005 17:47:03.808331 2684 log.go:181] (0xc000ba0d20) (5) Data frame handling\nI1005 17:47:03.808350 2684 log.go:181] (0xc000ba0d20) (5) Data frame sent\nI1005 17:47:03.808362 2684 log.go:181] (0xc000193970) Data frame received for 5\nI1005 17:47:03.808371 2684 log.go:181] (0xc000ba0d20) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30040\nConnection to 172.18.0.16 30040 port [tcp/30040] succeeded!\nI1005 17:47:03.809628 2684 log.go:181] (0xc000193970) Data frame received for 1\nI1005 17:47:03.809659 2684 log.go:181] (0xc000cf28c0) (1) Data frame handling\nI1005 17:47:03.809695 2684 log.go:181] (0xc000cf28c0) (1) Data frame sent\nI1005 17:47:03.809720 2684 log.go:181] (0xc000193970) (0xc000cf28c0) Stream removed, broadcasting: 1\nI1005 
17:47:03.809741 2684 log.go:181] (0xc000193970) Go away received\nI1005 17:47:03.810046 2684 log.go:181] (0xc000193970) (0xc000cf28c0) Stream removed, broadcasting: 1\nI1005 17:47:03.810066 2684 log.go:181] (0xc000193970) (0xc000cf20a0) Stream removed, broadcasting: 3\nI1005 17:47:03.810079 2684 log.go:181] (0xc000193970) (0xc000ba0d20) Stream removed, broadcasting: 5\n" Oct 5 17:47:03.815: INFO: stdout: "" Oct 5 17:47:03.815: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:47:03.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2185" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:15.313 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":196,"skipped":3358,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:47:03.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:47:04.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6" in namespace "projected-9406" to be "Succeeded or Failed" Oct 5 17:47:04.056: INFO: Pod "downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.821797ms Oct 5 17:47:06.060: INFO: Pod "downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02282031s Oct 5 17:47:08.064: INFO: Pod "downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027107451s STEP: Saw pod success Oct 5 17:47:08.064: INFO: Pod "downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6" satisfied condition "Succeeded or Failed" Oct 5 17:47:08.067: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6 container client-container: STEP: delete the pod Oct 5 17:47:08.118: INFO: Waiting for pod downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6 to disappear Oct 5 17:47:08.163: INFO: Pod downwardapi-volume-f4d8a737-0cdd-4522-bb9c-fc73bae5b3c6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:47:08.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9406" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:47:08.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing 
custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:47:08.257: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:47:14.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2688" for this suite. • [SLOW TEST:6.367 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":198,"skipped":3415,"failed":0} [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:47:14.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 17:47:14.673: INFO: Waiting up to 5m0s for pod "downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd" in namespace "downward-api-637" to be "Succeeded or Failed" Oct 5 17:47:14.678: INFO: Pod "downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280271ms Oct 5 17:47:16.741: INFO: Pod "downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067495382s Oct 5 17:47:18.745: INFO: Pod "downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072018739s STEP: Saw pod success Oct 5 17:47:18.745: INFO: Pod "downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd" satisfied condition "Succeeded or Failed" Oct 5 17:47:18.748: INFO: Trying to get logs from node latest-worker pod downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd container dapi-container: STEP: delete the pod Oct 5 17:47:18.813: INFO: Waiting for pod downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd to disappear Oct 5 17:47:18.824: INFO: Pod downward-api-442dedd3-7a80-47ad-97d3-664a0e3aeddd no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:47:18.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-637" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:47:18.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9557 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 5 17:47:18.919: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 5 17:47:19.223: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:47:21.391: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:47:23.226: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 17:47:25.226: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:27.235: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:29.227: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:31.227: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:33.228: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:35.229: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:37.230: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:39.227: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 17:47:41.227: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 5 17:47:41.234: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 5 17:47:45.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.14:8080/dial?request=hostname&protocol=http&host=10.244.1.27&port=8080&tries=1'] Namespace:pod-network-test-9557 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:47:45.256: INFO: >>> kubeConfig: /root/.kube/config I1005 
17:47:45.284526 7 log.go:181] (0xc002b77080) (0xc003564aa0) Create stream I1005 17:47:45.284559 7 log.go:181] (0xc002b77080) (0xc003564aa0) Stream added, broadcasting: 1 I1005 17:47:45.289031 7 log.go:181] (0xc002b77080) Reply frame received for 1 I1005 17:47:45.289069 7 log.go:181] (0xc002b77080) (0xc0001f77c0) Create stream I1005 17:47:45.289088 7 log.go:181] (0xc002b77080) (0xc0001f77c0) Stream added, broadcasting: 3 I1005 17:47:45.290176 7 log.go:181] (0xc002b77080) Reply frame received for 3 I1005 17:47:45.290204 7 log.go:181] (0xc002b77080) (0xc0001f7860) Create stream I1005 17:47:45.290214 7 log.go:181] (0xc002b77080) (0xc0001f7860) Stream added, broadcasting: 5 I1005 17:47:45.291144 7 log.go:181] (0xc002b77080) Reply frame received for 5 I1005 17:47:45.389811 7 log.go:181] (0xc002b77080) Data frame received for 3 I1005 17:47:45.389834 7 log.go:181] (0xc0001f77c0) (3) Data frame handling I1005 17:47:45.389851 7 log.go:181] (0xc0001f77c0) (3) Data frame sent I1005 17:47:45.390676 7 log.go:181] (0xc002b77080) Data frame received for 5 I1005 17:47:45.390689 7 log.go:181] (0xc0001f7860) (5) Data frame handling I1005 17:47:45.391013 7 log.go:181] (0xc002b77080) Data frame received for 3 I1005 17:47:45.391051 7 log.go:181] (0xc0001f77c0) (3) Data frame handling I1005 17:47:45.393343 7 log.go:181] (0xc002b77080) Data frame received for 1 I1005 17:47:45.393360 7 log.go:181] (0xc003564aa0) (1) Data frame handling I1005 17:47:45.393369 7 log.go:181] (0xc003564aa0) (1) Data frame sent I1005 17:47:45.393379 7 log.go:181] (0xc002b77080) (0xc003564aa0) Stream removed, broadcasting: 1 I1005 17:47:45.393398 7 log.go:181] (0xc002b77080) Go away received I1005 17:47:45.393580 7 log.go:181] (0xc002b77080) (0xc003564aa0) Stream removed, broadcasting: 1 I1005 17:47:45.393636 7 log.go:181] (0xc002b77080) (0xc0001f77c0) Stream removed, broadcasting: 3 I1005 17:47:45.393660 7 log.go:181] (0xc002b77080) (0xc0001f7860) Stream removed, broadcasting: 5 Oct 5 17:47:45.393: INFO: Waiting 
for responses: map[] Oct 5 17:47:45.412: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.14:8080/dial?request=hostname&protocol=http&host=10.244.2.13&port=8080&tries=1'] Namespace:pod-network-test-9557 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:47:45.412: INFO: >>> kubeConfig: /root/.kube/config I1005 17:47:45.447274 7 log.go:181] (0xc002e11d90) (0xc0013f5c20) Create stream I1005 17:47:45.447298 7 log.go:181] (0xc002e11d90) (0xc0013f5c20) Stream added, broadcasting: 1 I1005 17:47:45.449471 7 log.go:181] (0xc002e11d90) Reply frame received for 1 I1005 17:47:45.449538 7 log.go:181] (0xc002e11d90) (0xc0060fdc20) Create stream I1005 17:47:45.449556 7 log.go:181] (0xc002e11d90) (0xc0060fdc20) Stream added, broadcasting: 3 I1005 17:47:45.450722 7 log.go:181] (0xc002e11d90) Reply frame received for 3 I1005 17:47:45.450784 7 log.go:181] (0xc002e11d90) (0xc0001f7900) Create stream I1005 17:47:45.450813 7 log.go:181] (0xc002e11d90) (0xc0001f7900) Stream added, broadcasting: 5 I1005 17:47:45.451955 7 log.go:181] (0xc002e11d90) Reply frame received for 5 I1005 17:47:45.514844 7 log.go:181] (0xc002e11d90) Data frame received for 3 I1005 17:47:45.514866 7 log.go:181] (0xc0060fdc20) (3) Data frame handling I1005 17:47:45.514880 7 log.go:181] (0xc0060fdc20) (3) Data frame sent I1005 17:47:45.515308 7 log.go:181] (0xc002e11d90) Data frame received for 3 I1005 17:47:45.515331 7 log.go:181] (0xc0060fdc20) (3) Data frame handling I1005 17:47:45.515352 7 log.go:181] (0xc002e11d90) Data frame received for 5 I1005 17:47:45.515359 7 log.go:181] (0xc0001f7900) (5) Data frame handling I1005 17:47:45.516631 7 log.go:181] (0xc002e11d90) Data frame received for 1 I1005 17:47:45.516663 7 log.go:181] (0xc0013f5c20) (1) Data frame handling I1005 17:47:45.516685 7 log.go:181] (0xc0013f5c20) (1) Data frame sent I1005 17:47:45.516713 7 log.go:181] (0xc002e11d90) (0xc0013f5c20) Stream 
removed, broadcasting: 1 I1005 17:47:45.516735 7 log.go:181] (0xc002e11d90) Go away received I1005 17:47:45.516901 7 log.go:181] (0xc002e11d90) (0xc0013f5c20) Stream removed, broadcasting: 1 I1005 17:47:45.516934 7 log.go:181] (0xc002e11d90) (0xc0060fdc20) Stream removed, broadcasting: 3 I1005 17:47:45.516946 7 log.go:181] (0xc002e11d90) (0xc0001f7900) Stream removed, broadcasting: 5 Oct 5 17:47:45.516: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:47:45.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9557" for this suite. • [SLOW TEST:26.698 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3454,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:47:45.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-5e8e2814-b118-4003-a300-230aaa4acd1e STEP: Creating secret with name s-test-opt-upd-e457a4cb-d487-4881-a48e-62960dd870bd STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5e8e2814-b118-4003-a300-230aaa4acd1e STEP: Updating secret s-test-opt-upd-e457a4cb-d487-4881-a48e-62960dd870bd STEP: Creating secret with name s-test-opt-create-f7ab3758-d43e-4f73-975a-f4ac481f41f7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:49:10.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9011" for this suite. 
• [SLOW TEST:85.102 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":201,"skipped":3458,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:49:10.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-pz5q STEP: Creating a pod to test atomic-volume-subpath Oct 5 17:49:10.805: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pz5q" in namespace "subpath-660" to 
be "Succeeded or Failed" Oct 5 17:49:10.809: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702439ms Oct 5 17:49:12.814: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008819142s Oct 5 17:49:14.818: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 4.012616891s Oct 5 17:49:16.823: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 6.017609648s Oct 5 17:49:18.828: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 8.023092473s Oct 5 17:49:20.832: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 10.027255339s Oct 5 17:49:22.837: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 12.031888928s Oct 5 17:49:24.842: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 14.03726685s Oct 5 17:49:26.847: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 16.041974434s Oct 5 17:49:28.851: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 18.045540396s Oct 5 17:49:30.856: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 20.051025847s Oct 5 17:49:32.861: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 22.056203039s Oct 5 17:49:34.866: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Running", Reason="", readiness=true. Elapsed: 24.06104499s Oct 5 17:49:36.871: INFO: Pod "pod-subpath-test-secret-pz5q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.0660972s STEP: Saw pod success Oct 5 17:49:36.871: INFO: Pod "pod-subpath-test-secret-pz5q" satisfied condition "Succeeded or Failed" Oct 5 17:49:36.874: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-pz5q container test-container-subpath-secret-pz5q: STEP: delete the pod Oct 5 17:49:36.913: INFO: Waiting for pod pod-subpath-test-secret-pz5q to disappear Oct 5 17:49:36.930: INFO: Pod pod-subpath-test-secret-pz5q no longer exists STEP: Deleting pod pod-subpath-test-secret-pz5q Oct 5 17:49:36.930: INFO: Deleting pod "pod-subpath-test-secret-pz5q" in namespace "subpath-660" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:49:36.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-660" for this suite. • [SLOW TEST:26.308 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":202,"skipped":3463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:49:36.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:49:37.032: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:49:38.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2963" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":203,"skipped":3514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:49:38.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 17:49:41.403: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:49:41.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-runtime-3692" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3542,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:49:41.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:49:41.693: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 5 17:49:46.696: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 5 17:49:46.696: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 5 17:49:48.701: INFO: Creating deployment "test-rollover-deployment" Oct 5 17:49:48.711: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 5 17:49:50.719: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 5 17:49:50.725: INFO: Ensure that both 
replica sets have 1 created replica Oct 5 17:49:50.730: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 5 17:49:50.737: INFO: Updating deployment test-rollover-deployment Oct 5 17:49:50.737: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 5 17:49:52.792: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 5 17:49:52.799: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 5 17:49:52.805: INFO: all replica sets need to contain the pod-template-hash label Oct 5 17:49:52.805: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516991, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:49:54.814: INFO: all replica sets need to contain the pod-template-hash label Oct 5 17:49:54.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, 
loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:49:56.815: INFO: all replica sets need to contain the pod-template-hash label Oct 5 17:49:56.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:49:58.811: INFO: all replica sets need to contain the pod-template-hash label Oct 5 17:49:58.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:50:00.816: INFO: all replica sets need to contain the pod-template-hash label Oct 5 17:50:00.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:50:02.815: INFO: all replica sets need to contain the pod-template-hash label Oct 5 17:50:02.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516994, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737516988, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:50:04.840: INFO: Oct 5 17:50:04.840: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 17:50:04.848: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9717 /apis/apps/v1/namespaces/deployment-9717/deployments/test-rollover-deployment cbb48b39-02d3-480c-81c0-a9982152dc9d 3411365 2 2020-10-05 17:49:48 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-05 17:49:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 17:50:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0044175f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-05 17:49:48 +0000 UTC,LastTransitionTime:2020-10-05 17:49:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-10-05 17:50:04 +0000 UTC,LastTransitionTime:2020-10-05 17:49:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 5 17:50:04.852: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-9717 /apis/apps/v1/namespaces/deployment-9717/replicasets/test-rollover-deployment-5797c7764 7378c28e-3069-49a5-bf0e-e14b43d0ccf1 3411354 2 2020-10-05 17:49:50 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment cbb48b39-02d3-480c-81c0-a9982152dc9d 0xc004417ae0 0xc004417ae1}] [] [{kube-controller-manager Update apps/v1 2020-10-05 17:50:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbb48b39-02d3-480c-81c0-a9982152dc9d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004417b58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:50:04.852: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 5 17:50:04.852: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9717 /apis/apps/v1/namespaces/deployment-9717/replicasets/test-rollover-controller 41bfe4bb-e8a6-4c7a-8057-29ccdb9d303f 3411364 2 2020-10-05 17:49:41 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment cbb48b39-02d3-480c-81c0-a9982152dc9d 0xc0044179cf 0xc0044179e0}] [] [{e2e.test Update apps/v1 2020-10-05 17:49:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 17:50:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbb48b39-02d3-480c-81c0-a9982152dc9d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004417a78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:50:04.852: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9717 /apis/apps/v1/namespaces/deployment-9717/replicasets/test-rollover-deployment-78bc8b888c 8401cb0c-bcd2-4963-a4b4-00a961fe0032 3411306 2 2020-10-05 17:49:48 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment cbb48b39-02d3-480c-81c0-a9982152dc9d 0xc004417bc7 0xc004417bc8}] [] [{kube-controller-manager Update apps/v1 2020-10-05 17:49:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbb48b39-02d3-480c-81c0-a9982152dc9d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004417c58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 17:50:04.855: INFO: Pod "test-rollover-deployment-5797c7764-9772q" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-9772q test-rollover-deployment-5797c7764- deployment-9717 /api/v1/namespaces/deployment-9717/pods/test-rollover-deployment-5797c7764-9772q 181cbe78-13f0-4cb9-9951-7f41eb38a920 3411322 0 2020-10-05 17:49:50 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 7378c28e-3069-49a5-bf0e-e14b43d0ccf1 0xc0079f22d0 0xc0079f22d1}] [] [{kube-controller-manager Update v1 2020-10-05 17:49:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7378c28e-3069-49a5-bf0e-e14b43d0ccf1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 17:49:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xcln5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xcln5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xcln5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:49:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:49:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 17:49:50 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.31,StartTime:2020-10-05 17:49:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 17:49:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://83644444e65f1861268b6bea93db5ff1aab91e8102f85fffd0a804ac6e1a047b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:50:04.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9717" for this suite. 
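The rollout parameters visible in the status dumps above (MinReadySeconds:10, MaxUnavailable:0, MaxSurge:1) correspond to a Deployment spec along these lines. This is a sketch assembled from values in the log, not the exact object the test creates:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # a new pod must stay ready 10s before it counts as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during the rollout
      maxUnavailable: 0      # never drop below the desired replica count
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
        - name: agnhost
          image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

Updating the pod template image while the first rollout is still in flight (as the test does at 17:49:50) creates a second new ReplicaSet; the Deployment controller abandons the incomplete one, and once the final ReplicaSet's pod has been ready for `minReadySeconds`, both old ReplicaSets are scaled to 0, which is the end state the log shows.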
• [SLOW TEST:23.351 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":205,"skipped":3556,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:50:04.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Oct 5 17:50:05.004: INFO: Waiting up to 5m0s for pod "client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f" in namespace "containers-4247" to be "Succeeded or Failed" Oct 5 17:50:05.014: INFO: Pod "client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.035108ms Oct 5 17:50:07.019: INFO: Pod "client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015336844s Oct 5 17:50:09.024: INFO: Pod "client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f": Phase="Running", Reason="", readiness=true. Elapsed: 4.019432753s Oct 5 17:50:11.620: INFO: Pod "client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.615736795s STEP: Saw pod success Oct 5 17:50:11.620: INFO: Pod "client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f" satisfied condition "Succeeded or Failed" Oct 5 17:50:11.623: INFO: Trying to get logs from node latest-worker pod client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f container test-container: STEP: delete the pod Oct 5 17:50:11.799: INFO: Waiting for pod client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f to disappear Oct 5 17:50:11.848: INFO: Pod client-containers-7cea6f22-ba96-40b1-96bb-6cf6242c728f no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:50:11.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4247" for this suite. 
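The behavior this test verifies, overriding an image's default arguments (Docker `CMD`), comes down to setting `args` in the container spec. A minimal sketch, with the pod name, image, and argument values chosen for illustration rather than taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers            # illustrative; the test generates a unique name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image for this sketch
      # args replaces the image's CMD; the image's ENTRYPOINT is left intact.
      # Setting command (not shown) would replace the ENTRYPOINT instead.
      args: ["override", "arguments"]
```

The test then reads the container's log output to confirm the overridden arguments were what the process actually received.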
• [SLOW TEST:7.327 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3565,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:12.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-090a79ac-eada-4c6c-b9e0-ed77e17ead65
STEP: Creating a pod to test consume secrets
Oct 5 17:50:12.437: INFO: Waiting up to 5m0s for pod "pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1" in namespace "secrets-1952" to be "Succeeded or Failed"
Oct 5 17:50:12.536: INFO: Pod "pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1": Phase="Pending", Reason="", readiness=false. Elapsed: 99.065049ms
Oct 5 17:50:14.541: INFO: Pod "pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103880132s
Oct 5 17:50:16.669: INFO: Pod "pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.231648772s
Oct 5 17:50:18.673: INFO: Pod "pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.236367086s
STEP: Saw pod success
Oct 5 17:50:18.673: INFO: Pod "pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1" satisfied condition "Succeeded or Failed"
Oct 5 17:50:18.676: INFO: Trying to get logs from node latest-worker pod pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1 container secret-volume-test:
STEP: delete the pod
Oct 5 17:50:18.705: INFO: Waiting for pod pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1 to disappear
Oct 5 17:50:18.757: INFO: Pod pod-secrets-be970700-ad59-4ad4-ab59-f5f529d058e1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:50:18.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1952" for this suite.
STEP: Destroying namespace "secret-namespace-4164" for this suite.
• [SLOW TEST:6.592 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":207,"skipped":3593,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:18.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Oct 5 17:50:18.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config cluster-info'
Oct 5 17:50:21.892: INFO: stderr: ""
Oct 5 17:50:21.892: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35633\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35633/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:50:21.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2727" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":208,"skipped":3618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:21.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1fee9975-1bdd-413b-8e48-f6d6eeed1e36
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1fee9975-1bdd-413b-8e48-f6d6eeed1e36
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:50:28.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6262" for this suite.
• [SLOW TEST:6.189 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3643,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:28.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 5 17:50:28.167: INFO: Waiting up to 5m0s for pod "pod-4e9886b2-5189-4124-960b-ca234cd347a1" in namespace "emptydir-15" to be "Succeeded or Failed"
Oct 5 17:50:28.171: INFO: Pod "pod-4e9886b2-5189-4124-960b-ca234cd347a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120897ms
Oct 5 17:50:30.175: INFO: Pod "pod-4e9886b2-5189-4124-960b-ca234cd347a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008038406s
Oct 5 17:50:32.181: INFO: Pod "pod-4e9886b2-5189-4124-960b-ca234cd347a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014184182s
STEP: Saw pod success
Oct 5 17:50:32.181: INFO: Pod "pod-4e9886b2-5189-4124-960b-ca234cd347a1" satisfied condition "Succeeded or Failed"
Oct 5 17:50:32.184: INFO: Trying to get logs from node latest-worker pod pod-4e9886b2-5189-4124-960b-ca234cd347a1 container test-container:
STEP: delete the pod
Oct 5 17:50:32.211: INFO: Waiting for pod pod-4e9886b2-5189-4124-960b-ca234cd347a1 to disappear
Oct 5 17:50:32.227: INFO: Pod pod-4e9886b2-5189-4124-960b-ca234cd347a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:50:32.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-15" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3676,"failed":0}
SSSS
------------------------------
[sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:32.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:50:32.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1343" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":211,"skipped":3680,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:32.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-4274
STEP: creating service affinity-nodeport-transition in namespace services-4274
STEP: creating replication controller affinity-nodeport-transition in namespace services-4274
I1005 17:50:32.695994 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-4274, replica count: 3
I1005 17:50:35.746454 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1005 17:50:38.746719 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 17:50:38.759: INFO: Creating new exec pod Oct 5 17:50:43.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-4274 execpod-affinitygfhnz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Oct 5 17:50:44.070: INFO: stderr: "I1005 17:50:43.961336 2720 log.go:181] (0xc000c88e70) (0xc000c28500) Create stream\nI1005 17:50:43.961387 2720 log.go:181] (0xc000c88e70) (0xc000c28500) Stream added, broadcasting: 1\nI1005 17:50:43.964121 2720 log.go:181] (0xc000c88e70) Reply frame received for 1\nI1005 17:50:43.964176 2720 log.go:181] (0xc000c88e70) (0xc000c803c0) Create stream\nI1005 17:50:43.964198 2720 log.go:181] (0xc000c88e70) (0xc000c803c0) Stream added, broadcasting: 3\nI1005 17:50:43.965694 2720 log.go:181] (0xc000c88e70) Reply frame received for 3\nI1005 17:50:43.965749 2720 log.go:181] (0xc000c88e70) (0xc0005623c0) Create stream\nI1005 17:50:43.965779 2720 log.go:181] (0xc000c88e70) (0xc0005623c0) Stream added, broadcasting: 5\nI1005 17:50:43.966965 2720 log.go:181] (0xc000c88e70) Reply frame received for 5\nI1005 17:50:44.062288 2720 log.go:181] (0xc000c88e70) Data frame received for 5\nI1005 17:50:44.062323 2720 log.go:181] (0xc0005623c0) (5) Data frame handling\nI1005 17:50:44.062339 2720 log.go:181] (0xc0005623c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI1005 17:50:44.063179 2720 log.go:181] (0xc000c88e70) Data frame received for 5\nI1005 17:50:44.063201 2720 log.go:181] (0xc0005623c0) (5) Data frame handling\nI1005 17:50:44.063224 2720 log.go:181] (0xc0005623c0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI1005 17:50:44.063361 2720 log.go:181] (0xc000c88e70) Data frame received for 5\nI1005 17:50:44.063396 2720 log.go:181] (0xc0005623c0) (5) Data frame handling\nI1005 17:50:44.063584 2720 log.go:181] 
(0xc000c88e70) Data frame received for 3\nI1005 17:50:44.063596 2720 log.go:181] (0xc000c803c0) (3) Data frame handling\nI1005 17:50:44.065345 2720 log.go:181] (0xc000c88e70) Data frame received for 1\nI1005 17:50:44.065383 2720 log.go:181] (0xc000c28500) (1) Data frame handling\nI1005 17:50:44.065405 2720 log.go:181] (0xc000c28500) (1) Data frame sent\nI1005 17:50:44.065435 2720 log.go:181] (0xc000c88e70) (0xc000c28500) Stream removed, broadcasting: 1\nI1005 17:50:44.065476 2720 log.go:181] (0xc000c88e70) Go away received\nI1005 17:50:44.065915 2720 log.go:181] (0xc000c88e70) (0xc000c28500) Stream removed, broadcasting: 1\nI1005 17:50:44.065940 2720 log.go:181] (0xc000c88e70) (0xc000c803c0) Stream removed, broadcasting: 3\nI1005 17:50:44.065952 2720 log.go:181] (0xc000c88e70) (0xc0005623c0) Stream removed, broadcasting: 5\n" Oct 5 17:50:44.070: INFO: stdout: "" Oct 5 17:50:44.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-4274 execpod-affinitygfhnz -- /bin/sh -x -c nc -zv -t -w 2 10.98.215.161 80' Oct 5 17:50:44.252: INFO: stderr: "I1005 17:50:44.190386 2738 log.go:181] (0xc00003b8c0) (0xc0001448c0) Create stream\nI1005 17:50:44.190467 2738 log.go:181] (0xc00003b8c0) (0xc0001448c0) Stream added, broadcasting: 1\nI1005 17:50:44.192906 2738 log.go:181] (0xc00003b8c0) Reply frame received for 1\nI1005 17:50:44.192954 2738 log.go:181] (0xc00003b8c0) (0xc000c82500) Create stream\nI1005 17:50:44.192972 2738 log.go:181] (0xc00003b8c0) (0xc000c82500) Stream added, broadcasting: 3\nI1005 17:50:44.193718 2738 log.go:181] (0xc00003b8c0) Reply frame received for 3\nI1005 17:50:44.193744 2738 log.go:181] (0xc00003b8c0) (0xc000c825a0) Create stream\nI1005 17:50:44.193751 2738 log.go:181] (0xc00003b8c0) (0xc000c825a0) Stream added, broadcasting: 5\nI1005 17:50:44.194393 2738 log.go:181] (0xc00003b8c0) Reply frame received for 5\nI1005 17:50:44.245720 2738 log.go:181] (0xc00003b8c0) Data 
frame received for 3\nI1005 17:50:44.245751 2738 log.go:181] (0xc000c82500) (3) Data frame handling\nI1005 17:50:44.246009 2738 log.go:181] (0xc00003b8c0) Data frame received for 5\nI1005 17:50:44.246047 2738 log.go:181] (0xc000c825a0) (5) Data frame handling\nI1005 17:50:44.246077 2738 log.go:181] (0xc000c825a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.98.215.161 80\nConnection to 10.98.215.161 80 port [tcp/http] succeeded!\nI1005 17:50:44.246227 2738 log.go:181] (0xc00003b8c0) Data frame received for 5\nI1005 17:50:44.246253 2738 log.go:181] (0xc000c825a0) (5) Data frame handling\nI1005 17:50:44.247749 2738 log.go:181] (0xc00003b8c0) Data frame received for 1\nI1005 17:50:44.247774 2738 log.go:181] (0xc0001448c0) (1) Data frame handling\nI1005 17:50:44.247933 2738 log.go:181] (0xc0001448c0) (1) Data frame sent\nI1005 17:50:44.247955 2738 log.go:181] (0xc00003b8c0) (0xc0001448c0) Stream removed, broadcasting: 1\nI1005 17:50:44.247977 2738 log.go:181] (0xc00003b8c0) Go away received\nI1005 17:50:44.248365 2738 log.go:181] (0xc00003b8c0) (0xc0001448c0) Stream removed, broadcasting: 1\nI1005 17:50:44.248378 2738 log.go:181] (0xc00003b8c0) (0xc000c82500) Stream removed, broadcasting: 3\nI1005 17:50:44.248383 2738 log.go:181] (0xc00003b8c0) (0xc000c825a0) Stream removed, broadcasting: 5\n" Oct 5 17:50:44.253: INFO: stdout: "" Oct 5 17:50:44.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-4274 execpod-affinitygfhnz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30144' Oct 5 17:50:44.506: INFO: stderr: "I1005 17:50:44.411959 2755 log.go:181] (0xc000b2b080) (0xc000a22820) Create stream\nI1005 17:50:44.412008 2755 log.go:181] (0xc000b2b080) (0xc000a22820) Stream added, broadcasting: 1\nI1005 17:50:44.416674 2755 log.go:181] (0xc000b2b080) Reply frame received for 1\nI1005 17:50:44.416730 2755 log.go:181] (0xc000b2b080) (0xc000cbe0a0) Create stream\nI1005 17:50:44.416749 2755 log.go:181] 
(0xc000b2b080) (0xc000cbe0a0) Stream added, broadcasting: 3\nI1005 17:50:44.417708 2755 log.go:181] (0xc000b2b080) Reply frame received for 3\nI1005 17:50:44.417744 2755 log.go:181] (0xc000b2b080) (0xc000a22000) Create stream\nI1005 17:50:44.417752 2755 log.go:181] (0xc000b2b080) (0xc000a22000) Stream added, broadcasting: 5\nI1005 17:50:44.418521 2755 log.go:181] (0xc000b2b080) Reply frame received for 5\nI1005 17:50:44.496467 2755 log.go:181] (0xc000b2b080) Data frame received for 5\nI1005 17:50:44.496505 2755 log.go:181] (0xc000a22000) (5) Data frame handling\nI1005 17:50:44.496532 2755 log.go:181] (0xc000a22000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30144\nI1005 17:50:44.497050 2755 log.go:181] (0xc000b2b080) Data frame received for 5\nI1005 17:50:44.497089 2755 log.go:181] (0xc000a22000) (5) Data frame handling\nI1005 17:50:44.497111 2755 log.go:181] (0xc000a22000) (5) Data frame sent\nConnection to 172.18.0.15 30144 port [tcp/30144] succeeded!\nI1005 17:50:44.497309 2755 log.go:181] (0xc000b2b080) Data frame received for 5\nI1005 17:50:44.497381 2755 log.go:181] (0xc000a22000) (5) Data frame handling\nI1005 17:50:44.497668 2755 log.go:181] (0xc000b2b080) Data frame received for 3\nI1005 17:50:44.497686 2755 log.go:181] (0xc000cbe0a0) (3) Data frame handling\nI1005 17:50:44.499541 2755 log.go:181] (0xc000b2b080) Data frame received for 1\nI1005 17:50:44.499567 2755 log.go:181] (0xc000a22820) (1) Data frame handling\nI1005 17:50:44.499579 2755 log.go:181] (0xc000a22820) (1) Data frame sent\nI1005 17:50:44.499589 2755 log.go:181] (0xc000b2b080) (0xc000a22820) Stream removed, broadcasting: 1\nI1005 17:50:44.499656 2755 log.go:181] (0xc000b2b080) Go away received\nI1005 17:50:44.499915 2755 log.go:181] (0xc000b2b080) (0xc000a22820) Stream removed, broadcasting: 1\nI1005 17:50:44.499931 2755 log.go:181] (0xc000b2b080) (0xc000cbe0a0) Stream removed, broadcasting: 3\nI1005 17:50:44.499937 2755 log.go:181] (0xc000b2b080) (0xc000a22000) Stream removed, 
broadcasting: 5\n" Oct 5 17:50:44.506: INFO: stdout: "" Oct 5 17:50:44.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-4274 execpod-affinitygfhnz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30144' Oct 5 17:50:44.740: INFO: stderr: "I1005 17:50:44.661225 2774 log.go:181] (0xc0000b0000) (0xc0008ec000) Create stream\nI1005 17:50:44.661288 2774 log.go:181] (0xc0000b0000) (0xc0008ec000) Stream added, broadcasting: 1\nI1005 17:50:44.663383 2774 log.go:181] (0xc0000b0000) Reply frame received for 1\nI1005 17:50:44.663426 2774 log.go:181] (0xc0000b0000) (0xc000150a00) Create stream\nI1005 17:50:44.663440 2774 log.go:181] (0xc0000b0000) (0xc000150a00) Stream added, broadcasting: 3\nI1005 17:50:44.664513 2774 log.go:181] (0xc0000b0000) Reply frame received for 3\nI1005 17:50:44.664548 2774 log.go:181] (0xc0000b0000) (0xc000d215e0) Create stream\nI1005 17:50:44.664560 2774 log.go:181] (0xc0000b0000) (0xc000d215e0) Stream added, broadcasting: 5\nI1005 17:50:44.666101 2774 log.go:181] (0xc0000b0000) Reply frame received for 5\nI1005 17:50:44.731804 2774 log.go:181] (0xc0000b0000) Data frame received for 5\nI1005 17:50:44.731851 2774 log.go:181] (0xc000d215e0) (5) Data frame handling\nI1005 17:50:44.731865 2774 log.go:181] (0xc000d215e0) (5) Data frame sent\nI1005 17:50:44.731881 2774 log.go:181] (0xc0000b0000) Data frame received for 5\nI1005 17:50:44.731896 2774 log.go:181] (0xc000d215e0) (5) Data frame handling\nI1005 17:50:44.731910 2774 log.go:181] (0xc0000b0000) Data frame received for 3\nI1005 17:50:44.731921 2774 log.go:181] (0xc000150a00) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30144\nConnection to 172.18.0.16 30144 port [tcp/30144] succeeded!\nI1005 17:50:44.733785 2774 log.go:181] (0xc0000b0000) Data frame received for 1\nI1005 17:50:44.733809 2774 log.go:181] (0xc0008ec000) (1) Data frame handling\nI1005 17:50:44.733833 2774 log.go:181] (0xc0008ec000) (1) Data frame 
sent\nI1005 17:50:44.733863 2774 log.go:181] (0xc0000b0000) (0xc0008ec000) Stream removed, broadcasting: 1\nI1005 17:50:44.733890 2774 log.go:181] (0xc0000b0000) Go away received\nI1005 17:50:44.734455 2774 log.go:181] (0xc0000b0000) (0xc0008ec000) Stream removed, broadcasting: 1\nI1005 17:50:44.734489 2774 log.go:181] (0xc0000b0000) (0xc000150a00) Stream removed, broadcasting: 3\nI1005 17:50:44.734502 2774 log.go:181] (0xc0000b0000) (0xc000d215e0) Stream removed, broadcasting: 5\n" Oct 5 17:50:44.740: INFO: stdout: "" Oct 5 17:50:44.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-4274 execpod-affinitygfhnz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:30144/ ; done' Oct 5 17:50:45.080: INFO: stderr: "I1005 17:50:44.898979 2792 log.go:181] (0xc0000f6000) (0xc000bd4320) Create stream\nI1005 17:50:44.899053 2792 log.go:181] (0xc0000f6000) (0xc000bd4320) Stream added, broadcasting: 1\nI1005 17:50:44.900737 2792 log.go:181] (0xc0000f6000) Reply frame received for 1\nI1005 17:50:44.900783 2792 log.go:181] (0xc0000f6000) (0xc000bd43c0) Create stream\nI1005 17:50:44.900801 2792 log.go:181] (0xc0000f6000) (0xc000bd43c0) Stream added, broadcasting: 3\nI1005 17:50:44.901870 2792 log.go:181] (0xc0000f6000) Reply frame received for 3\nI1005 17:50:44.901915 2792 log.go:181] (0xc0000f6000) (0xc000209d60) Create stream\nI1005 17:50:44.901926 2792 log.go:181] (0xc0000f6000) (0xc000209d60) Stream added, broadcasting: 5\nI1005 17:50:44.902588 2792 log.go:181] (0xc0000f6000) Reply frame received for 5\nI1005 17:50:44.970437 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.970488 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:44.970505 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:44.970526 2792 
log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.970542 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.970559 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.977872 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.977895 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.977913 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.978970 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.979011 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.979028 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.979052 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.979074 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:44.979104 2792 log.go:181] (0xc000209d60) (5) Data frame sent\nI1005 17:50:44.979122 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.979132 2792 log.go:181] (0xc000209d60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:44.979154 2792 log.go:181] (0xc000209d60) (5) Data frame sent\nI1005 17:50:44.983858 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.983889 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.983902 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.984246 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.984259 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.984270 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.984291 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:44.984301 2792 log.go:181] (0xc000209d60) (5) Data frame sent\nI1005 17:50:44.984309 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.984315 2792 log.go:181] (0xc000209d60) (5) Data frame 
handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:44.984330 2792 log.go:181] (0xc000209d60) (5) Data frame sent\nI1005 17:50:44.984338 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.988213 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.988238 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.988257 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.989038 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.989056 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:44.989062 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:44.989087 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.989114 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.989131 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.994365 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.994379 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.994387 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.995272 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.995307 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.995320 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:44.995340 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:44.995352 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:44.995363 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:44.999824 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:44.999846 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:44.999869 2792 log.go:181] (0xc000bd43c0) (3) Data 
frame sent\nI1005 17:50:45.000548 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.000581 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.000595 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.000618 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.000646 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.000675 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.005566 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.005584 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.005597 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.006519 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.006574 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.006602 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.006635 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.006671 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.006716 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.013164 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.013193 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.013216 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.013991 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.014011 2792 log.go:181] (0xc000209d60) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.014028 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.014041 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.014050 2792 log.go:181] (0xc000bd43c0) 
(3) Data frame sent\nI1005 17:50:45.014058 2792 log.go:181] (0xc000209d60) (5) Data frame sent\nI1005 17:50:45.018345 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.018364 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.018379 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.018999 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.019046 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.019070 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.019124 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.019148 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.019188 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.025970 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.025992 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.026008 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.026771 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.026792 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.026802 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.026817 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.026831 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.026843 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.034222 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.034252 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.034272 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.034949 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.034962 2792 log.go:181] 
(0xc000209d60) (5) Data frame handling\nI1005 17:50:45.034967 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.034992 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.035014 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.035032 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.038252 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.038278 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.038300 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.039101 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.039123 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.039143 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.039164 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.039196 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.039216 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.044828 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.044987 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.045035 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.045278 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.045301 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.045319 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.045540 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.045576 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.045608 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.051514 2792 
log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.051540 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.051561 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.051931 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.051950 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.051965 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.051998 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.052008 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.052015 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.058668 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.058698 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.058721 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.059268 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.059283 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.059294 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.059309 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.059327 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.059339 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.064115 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.064144 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.064164 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.065225 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.065243 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.065253 2792 log.go:181] (0xc000209d60) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.065271 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.065299 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.065319 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.070098 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.070124 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.070153 2792 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI1005 17:50:45.071279 2792 log.go:181] (0xc0000f6000) Data frame received for 3\nI1005 17:50:45.071316 2792 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI1005 17:50:45.071346 2792 log.go:181] (0xc0000f6000) Data frame received for 5\nI1005 17:50:45.071364 2792 log.go:181] (0xc000209d60) (5) Data frame handling\nI1005 17:50:45.073387 2792 log.go:181] (0xc0000f6000) Data frame received for 1\nI1005 17:50:45.073412 2792 log.go:181] (0xc000bd4320) (1) Data frame handling\nI1005 17:50:45.073428 2792 log.go:181] (0xc000bd4320) (1) Data frame sent\nI1005 17:50:45.073444 2792 log.go:181] (0xc0000f6000) (0xc000bd4320) Stream removed, broadcasting: 1\nI1005 17:50:45.073468 2792 log.go:181] (0xc0000f6000) Go away received\nI1005 17:50:45.074079 2792 log.go:181] (0xc0000f6000) (0xc000bd4320) Stream removed, broadcasting: 1\nI1005 17:50:45.074105 2792 log.go:181] (0xc0000f6000) (0xc000bd43c0) Stream removed, broadcasting: 3\nI1005 17:50:45.074117 2792 log.go:181] (0xc0000f6000) (0xc000209d60) Stream removed, broadcasting: 5\n" Oct 5 17:50:45.081: INFO: stdout: 
"\naffinity-nodeport-transition-65wrs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-nftk2\naffinity-nodeport-transition-nftk2\naffinity-nodeport-transition-nftk2\naffinity-nodeport-transition-nftk2\naffinity-nodeport-transition-65wrs\naffinity-nodeport-transition-nftk2\naffinity-nodeport-transition-65wrs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-nftk2\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-65wrs\naffinity-nodeport-transition-nftk2"
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-65wrs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-nftk2
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-nftk2
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-nftk2
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-nftk2
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-65wrs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-nftk2
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-65wrs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-nftk2
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.081: INFO: Received response from host: affinity-nodeport-transition-65wrs
Oct 5 17:50:45.081: INFO: Received response from host:
affinity-nodeport-transition-nftk2 Oct 5 17:50:45.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-4274 execpod-affinitygfhnz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:30144/ ; done' Oct 5 17:50:45.400: INFO: stderr: "I1005 17:50:45.236620 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4a00) Create stream\nI1005 17:50:45.236715 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4a00) Stream added, broadcasting: 1\nI1005 17:50:45.243993 2810 log.go:181] (0xc0005dd3f0) Reply frame received for 1\nI1005 17:50:45.244060 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4000) Create stream\nI1005 17:50:45.244083 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4000) Stream added, broadcasting: 3\nI1005 17:50:45.245328 2810 log.go:181] (0xc0005dd3f0) Reply frame received for 3\nI1005 17:50:45.245390 2810 log.go:181] (0xc0005dd3f0) (0xc000c14000) Create stream\nI1005 17:50:45.245415 2810 log.go:181] (0xc0005dd3f0) (0xc000c14000) Stream added, broadcasting: 5\nI1005 17:50:45.246369 2810 log.go:181] (0xc0005dd3f0) Reply frame received for 5\nI1005 17:50:45.296245 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.296273 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.296281 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.296298 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.296303 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.296308 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.299265 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.299280 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.299286 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.299654 2810 log.go:181] (0xc0005dd3f0) 
Data frame received for 5\nI1005 17:50:45.299675 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.299680 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.299700 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.299721 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.299740 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.306192 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.306211 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.306225 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.306990 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.307030 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.307046 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.307068 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.307085 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.307106 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.311476 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.311497 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.311512 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.311900 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.311936 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.311951 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.311970 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.311980 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.311988 2810 log.go:181] 
(0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.318097 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.318125 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.318145 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.318175 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.318190 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.318209 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.318223 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.318233 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.318253 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.322619 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.322636 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.322649 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.323011 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.323042 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.323061 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.323082 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.323096 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.323110 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.326928 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.326942 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.326958 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.327484 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.327505 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.327524 
2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.327591 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.327609 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.327621 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.332664 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.332686 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.332711 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.333320 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.333340 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.333348 2810 log.go:181] (0xc000c14000) (5) Data frame sent\nI1005 17:50:45.333360 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.333375 2810 log.go:181] (0xc000c14000) (5) Data frame handling\n+ echo\nI1005 17:50:45.333390 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.333402 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.333413 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.333433 2810 log.go:181] (0xc000c14000) (5) Data frame sent\nI1005 17:50:45.337914 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.337933 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.337948 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.338385 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.338413 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.338426 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.338441 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.338452 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.338463 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.344209 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.344248 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.344279 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.344758 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.344776 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.344793 2810 log.go:181] (0xc000c14000) (5) Data frame sent\nI1005 17:50:45.344799 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.344804 2810 log.go:181] (0xc000c14000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.344818 2810 log.go:181] (0xc000c14000) (5) Data frame sent\nI1005 17:50:45.344831 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.344930 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.344940 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.352153 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.352174 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.352194 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.353101 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.353129 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.353155 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.353195 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.353237 2810 log.go:181] (0xc000c14000) (5) Data frame sent\nI1005 17:50:45.353262 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I1005 17:50:45.353286 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 
17:50:45.353308 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n http://172.18.0.15:30144/\nI1005 17:50:45.353330 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.359776 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.359799 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.359828 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.360109 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.360127 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.360139 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.360210 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.360239 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.360275 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.365327 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.365350 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.365369 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.365636 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.365655 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.365671 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.368784 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.368800 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.368813 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.369871 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.369887 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.369899 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.370844 2810 log.go:181] 
(0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.370870 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.370884 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.373312 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.373342 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.373366 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.376307 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.376320 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.376327 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.377223 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.377238 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.377245 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.377276 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.377304 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.377326 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.381999 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.382026 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.382061 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.382492 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.382517 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.382552 2810 log.go:181] (0xc000c14000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30144/\nI1005 17:50:45.382573 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.382582 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.382595 2810 
log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.389393 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.389416 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.389445 2810 log.go:181] (0xc0005d4000) (3) Data frame sent\nI1005 17:50:45.390329 2810 log.go:181] (0xc0005dd3f0) Data frame received for 3\nI1005 17:50:45.390344 2810 log.go:181] (0xc0005d4000) (3) Data frame handling\nI1005 17:50:45.390908 2810 log.go:181] (0xc0005dd3f0) Data frame received for 5\nI1005 17:50:45.390940 2810 log.go:181] (0xc000c14000) (5) Data frame handling\nI1005 17:50:45.393398 2810 log.go:181] (0xc0005dd3f0) Data frame received for 1\nI1005 17:50:45.393421 2810 log.go:181] (0xc0005d4a00) (1) Data frame handling\nI1005 17:50:45.393433 2810 log.go:181] (0xc0005d4a00) (1) Data frame sent\nI1005 17:50:45.393446 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4a00) Stream removed, broadcasting: 1\nI1005 17:50:45.393525 2810 log.go:181] (0xc0005dd3f0) Go away received\nI1005 17:50:45.393826 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4a00) Stream removed, broadcasting: 1\nI1005 17:50:45.393857 2810 log.go:181] (0xc0005dd3f0) (0xc0005d4000) Stream removed, broadcasting: 3\nI1005 17:50:45.393874 2810 log.go:181] (0xc0005dd3f0) (0xc000c14000) Stream removed, broadcasting: 5\n" Oct 5 17:50:45.400: INFO: stdout: "\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs\naffinity-nodeport-transition-94zxs" Oct 5 17:50:45.401: INFO: Received response from host: 
affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Received response from host: affinity-nodeport-transition-94zxs
Oct 5 17:50:45.401: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4274, will wait for the garbage collector to delete the pods
Oct 5 17:50:45.503: INFO: Deleting ReplicationController affinity-nodeport-transition took: 8.190041ms
Oct 5 17:50:48.303: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 2.800191731s
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:50:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4274" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:27.492 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":212,"skipped":3706,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:50:59.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 17:51:00.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Oct 5 17:51:00.619: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T17:51:00Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T17:51:00Z]] name:name1 resourceVersion:3411803 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c865ee35-f2cc-4bb6-b7ec-bd34a8cf26b2] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Oct 5 17:51:10.626: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T17:51:10Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T17:51:10Z]] name:name2 resourceVersion:3411860 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c25e85db-de95-4c02-b648-ef15c6395362] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Oct 5 17:51:20.635: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T17:51:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T17:51:20Z]] name:name1 resourceVersion:3411890 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c865ee35-f2cc-4bb6-b7ec-bd34a8cf26b2] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Oct 5 17:51:30.642: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T17:51:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T17:51:30Z]] name:name2 resourceVersion:3411920 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c25e85db-de95-4c02-b648-ef15c6395362] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Oct 5 17:51:40.651: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T17:51:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T17:51:20Z]] name:name1 resourceVersion:3411950 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c865ee35-f2cc-4bb6-b7ec-bd34a8cf26b2] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Oct 5 17:51:50.660: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T17:51:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T17:51:30Z]] name:name2 resourceVersion:3411982 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c25e85db-de95-4c02-b648-ef15c6395362] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:52:01.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-960" for this suite.
• [SLOW TEST:61.239 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":213,"skipped":3720,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:52:01.190: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Oct 5 17:52:01.235: INFO: namespace kubectl-8653 Oct 5 17:52:01.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8653' Oct 5 17:52:01.596: INFO: stderr: "" Oct 5 17:52:01.596: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 5 17:52:02.601: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:52:02.601: INFO: Found 0 / 1 Oct 5 17:52:03.601: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:52:03.601: INFO: Found 0 / 1 Oct 5 17:52:04.602: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:52:04.602: INFO: Found 0 / 1 Oct 5 17:52:05.601: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:52:05.601: INFO: Found 1 / 1 Oct 5 17:52:05.601: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 5 17:52:05.605: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 17:52:05.605: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
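For reference, the test above pipes an Agnhost ReplicationController manifest into `kubectl create -f -`. A minimal sketch of such an RC follows — only the name `agnhost-primary` and the `app: agnhost` selector appear in the log; the image, tag, replica count, and port are assumptions:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    app: agnhost
  template:
    metadata:
      labels:
        app: agnhost
    spec:
      containers:
      - name: agnhost-primary
        # image and tag are assumptions; the e2e suite uses an agnhost test image
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        ports:
        - containerPort: 6379
```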
Oct 5 17:52:05.605: INFO: wait on agnhost-primary startup in kubectl-8653 Oct 5 17:52:05.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config logs agnhost-primary-zmbw4 agnhost-primary --namespace=kubectl-8653' Oct 5 17:52:05.745: INFO: stderr: "" Oct 5 17:52:05.745: INFO: stdout: "Paused\n" STEP: exposing RC Oct 5 17:52:05.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8653' Oct 5 17:52:05.898: INFO: stderr: "" Oct 5 17:52:05.898: INFO: stdout: "service/rm2 exposed\n" Oct 5 17:52:05.913: INFO: Service rm2 in namespace kubectl-8653 found. STEP: exposing service Oct 5 17:52:07.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8653' Oct 5 17:52:08.113: INFO: stderr: "" Oct 5 17:52:08.113: INFO: stdout: "service/rm3 exposed\n" Oct 5 17:52:08.121: INFO: Service rm3 in namespace kubectl-8653 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:52:10.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8653" for this suite. 
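The `kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379` command in the log is roughly equivalent to creating the following Service by hand. The name and ports come from the log; the selector is an assumption carried over from the RC's `app: agnhost` label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: agnhost   # assumed to be inherited from the exposed RC
  ports:
  - port: 1234
    targetPort: 6379
```

Exposing `rm2` again as `rm3` on port 2345 then simply layers a second Service over the same selector.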
• [SLOW TEST:8.951 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":214,"skipped":3724,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:52:10.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:52:22.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8164" for this suite. • [SLOW TEST:12.473 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":215,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:52:22.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:52:29.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3999" for this suite. • [SLOW TEST:7.103 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":216,"skipped":3776,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:52:29.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 5 17:52:29.797: INFO: Waiting up to 5m0s for pod "pod-1c5116c2-1279-41cf-8243-4afdae13d0d5" in namespace "emptydir-740" to be "Succeeded or Failed" Oct 5 17:52:29.818: INFO: Pod "pod-1c5116c2-1279-41cf-8243-4afdae13d0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.35578ms Oct 5 17:52:31.824: INFO: Pod "pod-1c5116c2-1279-41cf-8243-4afdae13d0d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026811482s Oct 5 17:52:33.842: INFO: Pod "pod-1c5116c2-1279-41cf-8243-4afdae13d0d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045612651s STEP: Saw pod success Oct 5 17:52:33.842: INFO: Pod "pod-1c5116c2-1279-41cf-8243-4afdae13d0d5" satisfied condition "Succeeded or Failed" Oct 5 17:52:33.845: INFO: Trying to get logs from node latest-worker2 pod pod-1c5116c2-1279-41cf-8243-4afdae13d0d5 container test-container: STEP: delete the pod Oct 5 17:52:33.865: INFO: Waiting for pod pod-1c5116c2-1279-41cf-8243-4afdae13d0d5 to disappear Oct 5 17:52:33.898: INFO: Pod pod-1c5116c2-1279-41cf-8243-4afdae13d0d5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:52:33.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-740" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3780,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:52:33.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-9d43aa46-db44-4d9d-89c4-1b4c6e299c6a STEP: Creating a pod to test consume secrets Oct 5 17:52:34.016: INFO: Waiting up to 5m0s for pod "pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0" in namespace "secrets-6369" to be "Succeeded or Failed" Oct 5 17:52:34.019: INFO: Pod "pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006205ms Oct 5 17:52:36.024: INFO: Pod "pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008159364s Oct 5 17:52:38.052: INFO: Pod "pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035409513s STEP: Saw pod success Oct 5 17:52:38.052: INFO: Pod "pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0" satisfied condition "Succeeded or Failed" Oct 5 17:52:38.055: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0 container secret-volume-test: STEP: delete the pod Oct 5 17:52:38.090: INFO: Waiting for pod pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0 to disappear Oct 5 17:52:38.097: INFO: Pod pod-secrets-bd1a1c97-f907-46c2-b0d3-0e5d6580d5c0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:52:38.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6369" for this suite. 
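The Secrets test above mounts a secret volume with key-to-path mappings and a per-item mode. A sketch of the pod spec it exercises — the secret name matches the log, but the key names, target paths, mode value, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical name for illustration
spec:
  containers:
  - name: secret-volume-test
    image: busybox            # image is an assumption
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-9d43aa46-db44-4d9d-89c4-1b4c6e299c6a
      items:
      - key: data-1            # hypothetical key
        path: new-path-data-1  # mapped path, per the "mappings" in the test name
        mode: 0400             # per-item mode, per "Item Mode set"
  restartPolicy: Never
```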
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":218,"skipped":3791,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:52:38.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1005 17:52:50.000557 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 17:53:52.045: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
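The garbage-collector test above gives half the pods two owners: the RC being deleted (whose deletion waits on dependents) and a surviving RC. A sketch of the resulting dual `ownerReferences` metadata — UIDs are placeholders; the GC must not delete such a pod while its valid owner still exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-example  # hypothetical name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: <uid-of-deleted-rc>       # placeholder
    blockOwnerDeletion: true       # deletion waits on dependents
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: <uid-of-surviving-rc>     # placeholder
```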
Oct 5 17:53:52.045: INFO: Deleting pod "simpletest-rc-to-be-deleted-8zgnf" in namespace "gc-1286" Oct 5 17:53:52.094: INFO: Deleting pod "simpletest-rc-to-be-deleted-9kxnb" in namespace "gc-1286" Oct 5 17:53:52.167: INFO: Deleting pod "simpletest-rc-to-be-deleted-c8vt6" in namespace "gc-1286" Oct 5 17:53:52.226: INFO: Deleting pod "simpletest-rc-to-be-deleted-fcvwj" in namespace "gc-1286" Oct 5 17:53:52.599: INFO: Deleting pod "simpletest-rc-to-be-deleted-gcfzw" in namespace "gc-1286" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:53:52.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1286" for this suite. • [SLOW TEST:74.657 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":219,"skipped":3804,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:53:52.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-3ed15b29-c0ba-4037-acc4-02e4b51a3721 STEP: Creating a pod to test consume configMaps Oct 5 17:53:53.395: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7" in namespace "projected-5953" to be "Succeeded or Failed" Oct 5 17:53:53.568: INFO: Pod "pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 173.51869ms Oct 5 17:53:55.579: INFO: Pod "pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184606204s Oct 5 17:53:57.583: INFO: Pod "pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.188752515s Oct 5 17:54:00.383: INFO: Pod "pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.988596989s STEP: Saw pod success Oct 5 17:54:00.383: INFO: Pod "pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7" satisfied condition "Succeeded or Failed" Oct 5 17:54:00.415: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7 container projected-configmap-volume-test: STEP: delete the pod Oct 5 17:54:00.514: INFO: Waiting for pod pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7 to disappear Oct 5 17:54:00.517: INFO: Pod pod-projected-configmaps-a72ea9b6-6d18-47e8-b284-fd9866cfd6e7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:54:00.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5953" for this suite. • [SLOW TEST:7.763 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3807,"failed":0} SSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:54:00.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 5 17:54:12.752: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:12.752: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:12.789088 7 log.go:181] (0xc0017784d0) (0xc001c0bea0) Create stream I1005 17:54:12.789120 7 log.go:181] (0xc0017784d0) (0xc001c0bea0) Stream added, broadcasting: 1 I1005 17:54:12.791298 7 log.go:181] (0xc0017784d0) Reply frame received for 1 I1005 17:54:12.791349 7 log.go:181] (0xc0017784d0) (0xc0064b3180) Create stream I1005 17:54:12.791368 7 log.go:181] (0xc0017784d0) (0xc0064b3180) Stream added, broadcasting: 3 I1005 17:54:12.792470 7 log.go:181] (0xc0017784d0) Reply frame received for 3 I1005 17:54:12.792510 7 log.go:181] (0xc0017784d0) (0xc0012b4320) Create stream I1005 17:54:12.792526 7 log.go:181] (0xc0017784d0) (0xc0012b4320) Stream added, broadcasting: 5 I1005 17:54:12.793662 7 log.go:181] (0xc0017784d0) Reply frame received for 5 I1005 17:54:12.857424 7 log.go:181] (0xc0017784d0) Data frame received for 5 I1005 17:54:12.857469 7 log.go:181] 
(0xc0012b4320) (5) Data frame handling I1005 17:54:12.857495 7 log.go:181] (0xc0017784d0) Data frame received for 3 I1005 17:54:12.857508 7 log.go:181] (0xc0064b3180) (3) Data frame handling I1005 17:54:12.857528 7 log.go:181] (0xc0064b3180) (3) Data frame sent I1005 17:54:12.857541 7 log.go:181] (0xc0017784d0) Data frame received for 3 I1005 17:54:12.857553 7 log.go:181] (0xc0064b3180) (3) Data frame handling I1005 17:54:12.859069 7 log.go:181] (0xc0017784d0) Data frame received for 1 I1005 17:54:12.859092 7 log.go:181] (0xc001c0bea0) (1) Data frame handling I1005 17:54:12.859105 7 log.go:181] (0xc001c0bea0) (1) Data frame sent I1005 17:54:12.859123 7 log.go:181] (0xc0017784d0) (0xc001c0bea0) Stream removed, broadcasting: 1 I1005 17:54:12.859204 7 log.go:181] (0xc0017784d0) (0xc001c0bea0) Stream removed, broadcasting: 1 I1005 17:54:12.859221 7 log.go:181] (0xc0017784d0) (0xc0064b3180) Stream removed, broadcasting: 3 I1005 17:54:12.859292 7 log.go:181] (0xc0017784d0) Go away received I1005 17:54:12.859353 7 log.go:181] (0xc0017784d0) (0xc0012b4320) Stream removed, broadcasting: 5 Oct 5 17:54:12.859: INFO: Exec stderr: "" Oct 5 17:54:12.859: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:12.859: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:12.895097 7 log.go:181] (0xc000028a50) (0xc0012b46e0) Create stream I1005 17:54:12.895133 7 log.go:181] (0xc000028a50) (0xc0012b46e0) Stream added, broadcasting: 1 I1005 17:54:12.897441 7 log.go:181] (0xc000028a50) Reply frame received for 1 I1005 17:54:12.897501 7 log.go:181] (0xc000028a50) (0xc0064b3220) Create stream I1005 17:54:12.897519 7 log.go:181] (0xc000028a50) (0xc0064b3220) Stream added, broadcasting: 3 I1005 17:54:12.898447 7 log.go:181] (0xc000028a50) Reply frame received for 3 I1005 17:54:12.898477 7 log.go:181] (0xc000028a50) (0xc001c0bf40) 
Create stream I1005 17:54:12.898490 7 log.go:181] (0xc000028a50) (0xc001c0bf40) Stream added, broadcasting: 5 I1005 17:54:12.899887 7 log.go:181] (0xc000028a50) Reply frame received for 5 I1005 17:54:12.970040 7 log.go:181] (0xc000028a50) Data frame received for 3 I1005 17:54:12.970078 7 log.go:181] (0xc0064b3220) (3) Data frame handling I1005 17:54:12.970097 7 log.go:181] (0xc0064b3220) (3) Data frame sent I1005 17:54:12.970111 7 log.go:181] (0xc000028a50) Data frame received for 3 I1005 17:54:12.970122 7 log.go:181] (0xc0064b3220) (3) Data frame handling I1005 17:54:12.970160 7 log.go:181] (0xc000028a50) Data frame received for 5 I1005 17:54:12.970198 7 log.go:181] (0xc001c0bf40) (5) Data frame handling I1005 17:54:12.970999 7 log.go:181] (0xc000028a50) Data frame received for 1 I1005 17:54:12.971018 7 log.go:181] (0xc0012b46e0) (1) Data frame handling I1005 17:54:12.971032 7 log.go:181] (0xc0012b46e0) (1) Data frame sent I1005 17:54:12.971042 7 log.go:181] (0xc000028a50) (0xc0012b46e0) Stream removed, broadcasting: 1 I1005 17:54:12.971053 7 log.go:181] (0xc000028a50) Go away received I1005 17:54:12.971134 7 log.go:181] (0xc000028a50) (0xc0012b46e0) Stream removed, broadcasting: 1 I1005 17:54:12.971163 7 log.go:181] (0xc000028a50) (0xc0064b3220) Stream removed, broadcasting: 3 I1005 17:54:12.971177 7 log.go:181] (0xc000028a50) (0xc001c0bf40) Stream removed, broadcasting: 5 Oct 5 17:54:12.971: INFO: Exec stderr: "" Oct 5 17:54:12.971: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:12.971: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.024485 7 log.go:181] (0xc000029130) (0xc0012b4960) Create stream I1005 17:54:13.024506 7 log.go:181] (0xc000029130) (0xc0012b4960) Stream added, broadcasting: 1 I1005 17:54:13.027027 7 log.go:181] (0xc000029130) Reply frame received for 1 I1005 17:54:13.027064 7 
log.go:181] (0xc000029130) (0xc006250aa0) Create stream I1005 17:54:13.027074 7 log.go:181] (0xc000029130) (0xc006250aa0) Stream added, broadcasting: 3 I1005 17:54:13.027999 7 log.go:181] (0xc000029130) Reply frame received for 3 I1005 17:54:13.028032 7 log.go:181] (0xc000029130) (0xc002b14000) Create stream I1005 17:54:13.028046 7 log.go:181] (0xc000029130) (0xc002b14000) Stream added, broadcasting: 5 I1005 17:54:13.029153 7 log.go:181] (0xc000029130) Reply frame received for 5 I1005 17:54:13.092561 7 log.go:181] (0xc000029130) Data frame received for 3 I1005 17:54:13.092602 7 log.go:181] (0xc006250aa0) (3) Data frame handling I1005 17:54:13.092618 7 log.go:181] (0xc006250aa0) (3) Data frame sent I1005 17:54:13.092633 7 log.go:181] (0xc000029130) Data frame received for 3 I1005 17:54:13.092642 7 log.go:181] (0xc006250aa0) (3) Data frame handling I1005 17:54:13.092666 7 log.go:181] (0xc000029130) Data frame received for 5 I1005 17:54:13.092675 7 log.go:181] (0xc002b14000) (5) Data frame handling I1005 17:54:13.093898 7 log.go:181] (0xc000029130) Data frame received for 1 I1005 17:54:13.093942 7 log.go:181] (0xc0012b4960) (1) Data frame handling I1005 17:54:13.093976 7 log.go:181] (0xc0012b4960) (1) Data frame sent I1005 17:54:13.094003 7 log.go:181] (0xc000029130) (0xc0012b4960) Stream removed, broadcasting: 1 I1005 17:54:13.094033 7 log.go:181] (0xc000029130) Go away received I1005 17:54:13.094072 7 log.go:181] (0xc000029130) (0xc0012b4960) Stream removed, broadcasting: 1 I1005 17:54:13.094099 7 log.go:181] (0xc000029130) (0xc006250aa0) Stream removed, broadcasting: 3 I1005 17:54:13.094111 7 log.go:181] (0xc000029130) (0xc002b14000) Stream removed, broadcasting: 5 Oct 5 17:54:13.094: INFO: Exec stderr: "" Oct 5 17:54:13.094: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.094: INFO: >>> 
kubeConfig: /root/.kube/config I1005 17:54:13.151183 7 log.go:181] (0xc0029d04d0) (0xc0023cd9a0) Create stream I1005 17:54:13.151217 7 log.go:181] (0xc0029d04d0) (0xc0023cd9a0) Stream added, broadcasting: 1 I1005 17:54:13.155964 7 log.go:181] (0xc0029d04d0) Reply frame received for 1 I1005 17:54:13.156029 7 log.go:181] (0xc0029d04d0) (0xc002b140a0) Create stream I1005 17:54:13.156059 7 log.go:181] (0xc0029d04d0) (0xc002b140a0) Stream added, broadcasting: 3 I1005 17:54:13.157156 7 log.go:181] (0xc0029d04d0) Reply frame received for 3 I1005 17:54:13.157205 7 log.go:181] (0xc0029d04d0) (0xc006250be0) Create stream I1005 17:54:13.157225 7 log.go:181] (0xc0029d04d0) (0xc006250be0) Stream added, broadcasting: 5 I1005 17:54:13.158204 7 log.go:181] (0xc0029d04d0) Reply frame received for 5 I1005 17:54:13.217669 7 log.go:181] (0xc0029d04d0) Data frame received for 3 I1005 17:54:13.217742 7 log.go:181] (0xc002b140a0) (3) Data frame handling I1005 17:54:13.217770 7 log.go:181] (0xc002b140a0) (3) Data frame sent I1005 17:54:13.217787 7 log.go:181] (0xc0029d04d0) Data frame received for 3 I1005 17:54:13.217796 7 log.go:181] (0xc002b140a0) (3) Data frame handling I1005 17:54:13.217835 7 log.go:181] (0xc0029d04d0) Data frame received for 5 I1005 17:54:13.217856 7 log.go:181] (0xc006250be0) (5) Data frame handling I1005 17:54:13.219209 7 log.go:181] (0xc0029d04d0) Data frame received for 1 I1005 17:54:13.219224 7 log.go:181] (0xc0023cd9a0) (1) Data frame handling I1005 17:54:13.219238 7 log.go:181] (0xc0023cd9a0) (1) Data frame sent I1005 17:54:13.219407 7 log.go:181] (0xc0029d04d0) (0xc0023cd9a0) Stream removed, broadcasting: 1 I1005 17:54:13.219478 7 log.go:181] (0xc0029d04d0) Go away received I1005 17:54:13.219621 7 log.go:181] (0xc0029d04d0) (0xc0023cd9a0) Stream removed, broadcasting: 1 I1005 17:54:13.219657 7 log.go:181] (0xc0029d04d0) (0xc002b140a0) Stream removed, broadcasting: 3 I1005 17:54:13.219685 7 log.go:181] (0xc0029d04d0) (0xc006250be0) Stream removed, 
broadcasting: 5 Oct 5 17:54:13.219: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 5 17:54:13.219: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.219: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.251335 7 log.go:181] (0xc0029d0bb0) (0xc0023cdcc0) Create stream I1005 17:54:13.251374 7 log.go:181] (0xc0029d0bb0) (0xc0023cdcc0) Stream added, broadcasting: 1 I1005 17:54:13.253162 7 log.go:181] (0xc0029d0bb0) Reply frame received for 1 I1005 17:54:13.253213 7 log.go:181] (0xc0029d0bb0) (0xc0012b4be0) Create stream I1005 17:54:13.253228 7 log.go:181] (0xc0029d0bb0) (0xc0012b4be0) Stream added, broadcasting: 3 I1005 17:54:13.254068 7 log.go:181] (0xc0029d0bb0) Reply frame received for 3 I1005 17:54:13.254103 7 log.go:181] (0xc0029d0bb0) (0xc0012b4c80) Create stream I1005 17:54:13.254116 7 log.go:181] (0xc0029d0bb0) (0xc0012b4c80) Stream added, broadcasting: 5 I1005 17:54:13.254955 7 log.go:181] (0xc0029d0bb0) Reply frame received for 5 I1005 17:54:13.322945 7 log.go:181] (0xc0029d0bb0) Data frame received for 5 I1005 17:54:13.322993 7 log.go:181] (0xc0012b4c80) (5) Data frame handling I1005 17:54:13.323019 7 log.go:181] (0xc0029d0bb0) Data frame received for 3 I1005 17:54:13.323044 7 log.go:181] (0xc0012b4be0) (3) Data frame handling I1005 17:54:13.323060 7 log.go:181] (0xc0012b4be0) (3) Data frame sent I1005 17:54:13.323072 7 log.go:181] (0xc0029d0bb0) Data frame received for 3 I1005 17:54:13.323089 7 log.go:181] (0xc0012b4be0) (3) Data frame handling I1005 17:54:13.324791 7 log.go:181] (0xc0029d0bb0) Data frame received for 1 I1005 17:54:13.324815 7 log.go:181] (0xc0023cdcc0) (1) Data frame handling I1005 17:54:13.324830 7 log.go:181] (0xc0023cdcc0) (1) Data frame sent I1005 17:54:13.324915 7 log.go:181] 
(0xc0029d0bb0) (0xc0023cdcc0) Stream removed, broadcasting: 1 I1005 17:54:13.324930 7 log.go:181] (0xc0029d0bb0) Go away received I1005 17:54:13.325080 7 log.go:181] (0xc0029d0bb0) (0xc0023cdcc0) Stream removed, broadcasting: 1 I1005 17:54:13.325121 7 log.go:181] (0xc0029d0bb0) (0xc0012b4be0) Stream removed, broadcasting: 3 I1005 17:54:13.325146 7 log.go:181] (0xc0029d0bb0) (0xc0012b4c80) Stream removed, broadcasting: 5 Oct 5 17:54:13.325: INFO: Exec stderr: "" Oct 5 17:54:13.325: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.325: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.360361 7 log.go:181] (0xc0006fbce0) (0xc006250fa0) Create stream I1005 17:54:13.360397 7 log.go:181] (0xc0006fbce0) (0xc006250fa0) Stream added, broadcasting: 1 I1005 17:54:13.362521 7 log.go:181] (0xc0006fbce0) Reply frame received for 1 I1005 17:54:13.362566 7 log.go:181] (0xc0006fbce0) (0xc0023cde00) Create stream I1005 17:54:13.362577 7 log.go:181] (0xc0006fbce0) (0xc0023cde00) Stream added, broadcasting: 3 I1005 17:54:13.363415 7 log.go:181] (0xc0006fbce0) Reply frame received for 3 I1005 17:54:13.363461 7 log.go:181] (0xc0006fbce0) (0xc002b14140) Create stream I1005 17:54:13.363476 7 log.go:181] (0xc0006fbce0) (0xc002b14140) Stream added, broadcasting: 5 I1005 17:54:13.364309 7 log.go:181] (0xc0006fbce0) Reply frame received for 5 I1005 17:54:13.423210 7 log.go:181] (0xc0006fbce0) Data frame received for 3 I1005 17:54:13.423262 7 log.go:181] (0xc0023cde00) (3) Data frame handling I1005 17:54:13.423291 7 log.go:181] (0xc0023cde00) (3) Data frame sent I1005 17:54:13.423333 7 log.go:181] (0xc0006fbce0) Data frame received for 3 I1005 17:54:13.423364 7 log.go:181] (0xc0023cde00) (3) Data frame handling I1005 17:54:13.423417 7 log.go:181] (0xc0006fbce0) Data frame received for 5 I1005 17:54:13.423447 7 
log.go:181] (0xc002b14140) (5) Data frame handling I1005 17:54:13.425357 7 log.go:181] (0xc0006fbce0) Data frame received for 1 I1005 17:54:13.425398 7 log.go:181] (0xc006250fa0) (1) Data frame handling I1005 17:54:13.425435 7 log.go:181] (0xc006250fa0) (1) Data frame sent I1005 17:54:13.425532 7 log.go:181] (0xc0006fbce0) (0xc006250fa0) Stream removed, broadcasting: 1 I1005 17:54:13.425647 7 log.go:181] (0xc0006fbce0) (0xc006250fa0) Stream removed, broadcasting: 1 I1005 17:54:13.425681 7 log.go:181] (0xc0006fbce0) (0xc0023cde00) Stream removed, broadcasting: 3 I1005 17:54:13.425843 7 log.go:181] (0xc0006fbce0) Go away received I1005 17:54:13.425996 7 log.go:181] (0xc0006fbce0) (0xc002b14140) Stream removed, broadcasting: 5 Oct 5 17:54:13.426: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 5 17:54:13.426: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.426: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.458543 7 log.go:181] (0xc0029d1340) (0xc007b0e8c0) Create stream I1005 17:54:13.458581 7 log.go:181] (0xc0029d1340) (0xc007b0e8c0) Stream added, broadcasting: 1 I1005 17:54:13.460359 7 log.go:181] (0xc0029d1340) Reply frame received for 1 I1005 17:54:13.460391 7 log.go:181] (0xc0029d1340) (0xc002b141e0) Create stream I1005 17:54:13.460406 7 log.go:181] (0xc0029d1340) (0xc002b141e0) Stream added, broadcasting: 3 I1005 17:54:13.461441 7 log.go:181] (0xc0029d1340) Reply frame received for 3 I1005 17:54:13.461499 7 log.go:181] (0xc0029d1340) (0xc006251040) Create stream I1005 17:54:13.461515 7 log.go:181] (0xc0029d1340) (0xc006251040) Stream added, broadcasting: 5 I1005 17:54:13.462423 7 log.go:181] (0xc0029d1340) Reply frame received for 5 I1005 17:54:13.529796 7 log.go:181] (0xc0029d1340) Data frame 
received for 3 I1005 17:54:13.529859 7 log.go:181] (0xc002b141e0) (3) Data frame handling I1005 17:54:13.529873 7 log.go:181] (0xc002b141e0) (3) Data frame sent I1005 17:54:13.529879 7 log.go:181] (0xc0029d1340) Data frame received for 3 I1005 17:54:13.529883 7 log.go:181] (0xc002b141e0) (3) Data frame handling I1005 17:54:13.529915 7 log.go:181] (0xc0029d1340) Data frame received for 5 I1005 17:54:13.529954 7 log.go:181] (0xc006251040) (5) Data frame handling I1005 17:54:13.531975 7 log.go:181] (0xc0029d1340) Data frame received for 1 I1005 17:54:13.532004 7 log.go:181] (0xc007b0e8c0) (1) Data frame handling I1005 17:54:13.532025 7 log.go:181] (0xc007b0e8c0) (1) Data frame sent I1005 17:54:13.532042 7 log.go:181] (0xc0029d1340) (0xc007b0e8c0) Stream removed, broadcasting: 1 I1005 17:54:13.532123 7 log.go:181] (0xc0029d1340) (0xc007b0e8c0) Stream removed, broadcasting: 1 I1005 17:54:13.532145 7 log.go:181] (0xc0029d1340) (0xc002b141e0) Stream removed, broadcasting: 3 I1005 17:54:13.532179 7 log.go:181] (0xc0029d1340) (0xc006251040) Stream removed, broadcasting: 5 Oct 5 17:54:13.532: INFO: Exec stderr: "" Oct 5 17:54:13.532: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.532: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.532268 7 log.go:181] (0xc0029d1340) Go away received I1005 17:54:13.566824 7 log.go:181] (0xc0029d1a20) (0xc007b0eb40) Create stream I1005 17:54:13.566849 7 log.go:181] (0xc0029d1a20) (0xc007b0eb40) Stream added, broadcasting: 1 I1005 17:54:13.568984 7 log.go:181] (0xc0029d1a20) Reply frame received for 1 I1005 17:54:13.569049 7 log.go:181] (0xc0029d1a20) (0xc007b0ebe0) Create stream I1005 17:54:13.569073 7 log.go:181] (0xc0029d1a20) (0xc007b0ebe0) Stream added, broadcasting: 3 I1005 17:54:13.570041 7 log.go:181] (0xc0029d1a20) Reply frame received for 3 
I1005 17:54:13.570084 7 log.go:181] (0xc0029d1a20) (0xc002b14280) Create stream I1005 17:54:13.570103 7 log.go:181] (0xc0029d1a20) (0xc002b14280) Stream added, broadcasting: 5 I1005 17:54:13.571210 7 log.go:181] (0xc0029d1a20) Reply frame received for 5 I1005 17:54:13.643441 7 log.go:181] (0xc0029d1a20) Data frame received for 5 I1005 17:54:13.643473 7 log.go:181] (0xc002b14280) (5) Data frame handling I1005 17:54:13.643522 7 log.go:181] (0xc0029d1a20) Data frame received for 3 I1005 17:54:13.643566 7 log.go:181] (0xc007b0ebe0) (3) Data frame handling I1005 17:54:13.643591 7 log.go:181] (0xc007b0ebe0) (3) Data frame sent I1005 17:54:13.643607 7 log.go:181] (0xc0029d1a20) Data frame received for 3 I1005 17:54:13.643620 7 log.go:181] (0xc007b0ebe0) (3) Data frame handling I1005 17:54:13.644971 7 log.go:181] (0xc0029d1a20) Data frame received for 1 I1005 17:54:13.644993 7 log.go:181] (0xc007b0eb40) (1) Data frame handling I1005 17:54:13.645009 7 log.go:181] (0xc007b0eb40) (1) Data frame sent I1005 17:54:13.645081 7 log.go:181] (0xc0029d1a20) (0xc007b0eb40) Stream removed, broadcasting: 1 I1005 17:54:13.645128 7 log.go:181] (0xc0029d1a20) (0xc007b0eb40) Stream removed, broadcasting: 1 I1005 17:54:13.645140 7 log.go:181] (0xc0029d1a20) (0xc007b0ebe0) Stream removed, broadcasting: 3 I1005 17:54:13.645287 7 log.go:181] (0xc0029d1a20) Go away received I1005 17:54:13.645446 7 log.go:181] (0xc0029d1a20) (0xc002b14280) Stream removed, broadcasting: 5 Oct 5 17:54:13.645: INFO: Exec stderr: "" Oct 5 17:54:13.645: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.645: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.682470 7 log.go:181] (0xc000517760) (0xc006251360) Create stream I1005 17:54:13.682495 7 log.go:181] (0xc000517760) (0xc006251360) Stream added, broadcasting: 1 I1005 17:54:13.687307 7 
log.go:181] (0xc000517760) Reply frame received for 1 I1005 17:54:13.687374 7 log.go:181] (0xc000517760) (0xc002b14320) Create stream I1005 17:54:13.687405 7 log.go:181] (0xc000517760) (0xc002b14320) Stream added, broadcasting: 3 I1005 17:54:13.688617 7 log.go:181] (0xc000517760) Reply frame received for 3 I1005 17:54:13.688652 7 log.go:181] (0xc000517760) (0xc007b0ec80) Create stream I1005 17:54:13.688672 7 log.go:181] (0xc000517760) (0xc007b0ec80) Stream added, broadcasting: 5 I1005 17:54:13.689710 7 log.go:181] (0xc000517760) Reply frame received for 5 I1005 17:54:13.741992 7 log.go:181] (0xc000517760) Data frame received for 3 I1005 17:54:13.742049 7 log.go:181] (0xc002b14320) (3) Data frame handling I1005 17:54:13.742081 7 log.go:181] (0xc002b14320) (3) Data frame sent I1005 17:54:13.742111 7 log.go:181] (0xc000517760) Data frame received for 3 I1005 17:54:13.742142 7 log.go:181] (0xc000517760) Data frame received for 5 I1005 17:54:13.742187 7 log.go:181] (0xc007b0ec80) (5) Data frame handling I1005 17:54:13.742241 7 log.go:181] (0xc002b14320) (3) Data frame handling I1005 17:54:13.743944 7 log.go:181] (0xc000517760) Data frame received for 1 I1005 17:54:13.743992 7 log.go:181] (0xc006251360) (1) Data frame handling I1005 17:54:13.744030 7 log.go:181] (0xc006251360) (1) Data frame sent I1005 17:54:13.744059 7 log.go:181] (0xc000517760) (0xc006251360) Stream removed, broadcasting: 1 I1005 17:54:13.744088 7 log.go:181] (0xc000517760) Go away received I1005 17:54:13.744256 7 log.go:181] (0xc000517760) (0xc006251360) Stream removed, broadcasting: 1 I1005 17:54:13.744299 7 log.go:181] (0xc000517760) (0xc002b14320) Stream removed, broadcasting: 3 I1005 17:54:13.744329 7 log.go:181] (0xc000517760) (0xc007b0ec80) Stream removed, broadcasting: 5 Oct 5 17:54:13.744: INFO: Exec stderr: "" Oct 5 17:54:13.744: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3708 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 17:54:13.744: INFO: >>> kubeConfig: /root/.kube/config I1005 17:54:13.778918 7 log.go:181] (0xc000029c30) (0xc0012b55e0) Create stream I1005 17:54:13.778948 7 log.go:181] (0xc000029c30) (0xc0012b55e0) Stream added, broadcasting: 1 I1005 17:54:13.780361 7 log.go:181] (0xc000029c30) Reply frame received for 1 I1005 17:54:13.780395 7 log.go:181] (0xc000029c30) (0xc0012b5720) Create stream I1005 17:54:13.780409 7 log.go:181] (0xc000029c30) (0xc0012b5720) Stream added, broadcasting: 3 I1005 17:54:13.781344 7 log.go:181] (0xc000029c30) Reply frame received for 3 I1005 17:54:13.781385 7 log.go:181] (0xc000029c30) (0xc0064b32c0) Create stream I1005 17:54:13.781403 7 log.go:181] (0xc000029c30) (0xc0064b32c0) Stream added, broadcasting: 5 I1005 17:54:13.782179 7 log.go:181] (0xc000029c30) Reply frame received for 5 I1005 17:54:13.843972 7 log.go:181] (0xc000029c30) Data frame received for 5 I1005 17:54:13.844005 7 log.go:181] (0xc0064b32c0) (5) Data frame handling I1005 17:54:13.844025 7 log.go:181] (0xc000029c30) Data frame received for 3 I1005 17:54:13.844035 7 log.go:181] (0xc0012b5720) (3) Data frame handling I1005 17:54:13.844044 7 log.go:181] (0xc0012b5720) (3) Data frame sent I1005 17:54:13.844057 7 log.go:181] (0xc000029c30) Data frame received for 3 I1005 17:54:13.844069 7 log.go:181] (0xc0012b5720) (3) Data frame handling I1005 17:54:13.845348 7 log.go:181] (0xc000029c30) Data frame received for 1 I1005 17:54:13.845380 7 log.go:181] (0xc0012b55e0) (1) Data frame handling I1005 17:54:13.845526 7 log.go:181] (0xc0012b55e0) (1) Data frame sent I1005 17:54:13.845579 7 log.go:181] (0xc000029c30) (0xc0012b55e0) Stream removed, broadcasting: 1 I1005 17:54:13.845608 7 log.go:181] (0xc000029c30) Go away received I1005 17:54:13.845691 7 log.go:181] (0xc000029c30) (0xc0012b55e0) Stream removed, broadcasting: 1 I1005 17:54:13.845737 7 log.go:181] (0xc000029c30) (0xc0012b5720) Stream removed, broadcasting: 3 
I1005 17:54:13.845756 7 log.go:181] (0xc000029c30) (0xc0064b32c0) Stream removed, broadcasting: 5
Oct 5 17:54:13.845: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:54:13.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3708" for this suite.
• [SLOW TEST:13.329 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3810,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:54:13.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:54:27.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2831" for this suite.
• [SLOW TEST:13.240 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":222,"skipped":3818,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:54:27.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:54:27.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-98" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":223,"skipped":3822,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:54:27.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Oct 5 17:54:27.357: INFO: Waiting up to 5m0s for pod "var-expansion-eef071a8-da35-4af5-9173-bea099943ef3" in namespace "var-expansion-1632" to be "Succeeded or Failed"
Oct 5 17:54:27.371: INFO: Pod "var-expansion-eef071a8-da35-4af5-9173-bea099943ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.590662ms
Oct 5 17:54:29.375: INFO: Pod "var-expansion-eef071a8-da35-4af5-9173-bea099943ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017952974s
Oct 5 17:54:31.379: INFO: Pod "var-expansion-eef071a8-da35-4af5-9173-bea099943ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021954633s
Oct 5 17:54:33.383: INFO: Pod "var-expansion-eef071a8-da35-4af5-9173-bea099943ef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025668445s
STEP: Saw pod success
Oct 5 17:54:33.383: INFO: Pod "var-expansion-eef071a8-da35-4af5-9173-bea099943ef3" satisfied condition "Succeeded or Failed"
Oct 5 17:54:33.386: INFO: Trying to get logs from node latest-worker pod var-expansion-eef071a8-da35-4af5-9173-bea099943ef3 container dapi-container: 
STEP: delete the pod
Oct 5 17:54:33.415: INFO: Waiting for pod var-expansion-eef071a8-da35-4af5-9173-bea099943ef3 to disappear
Oct 5 17:54:33.426: INFO: Pod var-expansion-eef071a8-da35-4af5-9173-bea099943ef3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:54:33.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1632" for this suite.
• [SLOW TEST:6.185 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":224,"skipped":3847,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:54:33.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Oct 5 17:54:39.768: INFO: 10 pods remaining
Oct 5 17:54:39.768: INFO: 8 pods has nil DeletionTimestamp
Oct 5 17:54:39.768: INFO: 
Oct 5 17:54:41.449: INFO: 0 pods remaining
Oct 5 17:54:41.449: INFO: 0 pods has nil DeletionTimestamp
Oct 5 17:54:41.449: INFO: 
Oct 5 17:54:42.772: INFO: 0 pods remaining
Oct 5 17:54:42.772: INFO: 0 pods has nil DeletionTimestamp
Oct 5 17:54:42.772: INFO: 
STEP: Gathering metrics
W1005 17:54:43.711207 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 5 17:55:45.727: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:55:45.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5197" for this suite.
• [SLOW TEST:72.302 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":225,"skipped":3851,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 17:55:45.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create and stop a working application [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Oct 5 17:55:45.816: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Oct 5 17:55:45.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5658'
Oct 5 17:55:46.170: INFO: stderr: ""
Oct 5 17:55:46.170: INFO: stdout: "service/agnhost-replica created\n"
Oct 5 17:55:46.171: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Oct 5 17:55:46.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5658'
Oct 5 17:55:46.516: INFO: stderr: ""
Oct 5 17:55:46.517: INFO: stdout: "service/agnhost-primary created\n"
Oct 5 17:55:46.517: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Oct 5 17:55:46.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5658'
Oct 5 17:55:46.831: INFO: stderr: ""
Oct 5 17:55:46.831: INFO: stdout: "service/frontend created\n"
Oct 5 17:55:46.832: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Oct 5 17:55:46.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5658'
Oct 5 17:55:47.095: INFO: stderr: ""
Oct 5 17:55:47.095: INFO: stdout: "deployment.apps/frontend created\n"
Oct 5 17:55:47.095: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Oct 5 17:55:47.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5658'
Oct 5 17:55:47.987: INFO: stderr: ""
Oct 5 17:55:47.987: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Oct 5 17:55:47.987: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Oct 5 17:55:47.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5658'
Oct 5 17:55:48.329: INFO: stderr: ""
Oct 5 17:55:48.329: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 5 17:55:48.329: INFO: Waiting for all frontend pods to be Running.
Oct 5 17:55:58.379: INFO: Waiting for frontend to serve content.
Oct 5 17:55:58.393: INFO: Trying to add a new entry to the guestbook.
Oct 5 17:55:58.407: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct 5 17:55:58.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5658'
Oct 5 17:55:58.559: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 17:55:58.559: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 17:55:58.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5658'
Oct 5 17:55:58.758: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 17:55:58.758: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 17:55:58.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5658'
Oct 5 17:55:58.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 17:55:58.907: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 17:55:58.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5658'
Oct 5 17:55:59.021: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 17:55:59.021: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 17:55:59.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5658'
Oct 5 17:55:59.226: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 17:55:59.226: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 17:55:59.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5658'
Oct 5 17:55:59.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 17:55:59.735: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 17:55:59.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5658" for this suite.
• [SLOW TEST:14.145 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351
should create and stop a working application [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":226,"skipped":3869,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:55:59.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-bf7876a4-a5b9-45e7-9c9a-21a3542c3243 STEP: Creating a pod to test consume configMaps Oct 5 17:56:00.607: INFO: Waiting up to 5m0s for pod "pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca" in namespace "configmap-9332" to be "Succeeded or Failed" Oct 5 17:56:00.787: INFO: Pod "pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca": Phase="Pending", Reason="", readiness=false. Elapsed: 179.327579ms Oct 5 17:56:02.820: INFO: Pod "pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212879338s Oct 5 17:56:04.838: INFO: Pod "pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230544752s Oct 5 17:56:06.843: INFO: Pod "pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.235140013s STEP: Saw pod success Oct 5 17:56:06.843: INFO: Pod "pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca" satisfied condition "Succeeded or Failed" Oct 5 17:56:06.846: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca container configmap-volume-test: STEP: delete the pod Oct 5 17:56:06.945: INFO: Waiting for pod pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca to disappear Oct 5 17:56:06.951: INFO: Pod pod-configmaps-3bbaaa1e-7e82-4e02-b76c-5fa8e4ec4eca no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:06.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9332" for this suite. • [SLOW TEST:7.077 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3888,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:06.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 5 17:56:07.074: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1568 /api/v1/namespaces/watch-1568/configmaps/e2e-watch-test-resource-version fa343d18-e36b-4790-8a0f-1918f701df50 3413579 0 2020-10-05 17:56:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-05 17:56:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 17:56:07.074: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1568 /api/v1/namespaces/watch-1568/configmaps/e2e-watch-test-resource-version fa343d18-e36b-4790-8a0f-1918f701df50 3413580 0 2020-10-05 17:56:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-05 17:56:07 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:07.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1568" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":228,"skipped":3891,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:07.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 5 17:56:07.205: INFO: Waiting up to 5m0s for pod "pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7" in namespace "emptydir-8935" to be "Succeeded or Failed" Oct 5 17:56:07.208: INFO: Pod "pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.52238ms Oct 5 17:56:09.347: INFO: Pod "pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141963151s Oct 5 17:56:11.353: INFO: Pod "pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14774308s STEP: Saw pod success Oct 5 17:56:11.353: INFO: Pod "pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7" satisfied condition "Succeeded or Failed" Oct 5 17:56:11.355: INFO: Trying to get logs from node latest-worker pod pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7 container test-container: STEP: delete the pod Oct 5 17:56:11.397: INFO: Waiting for pod pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7 to disappear Oct 5 17:56:11.411: INFO: Pod pod-bbe4e58c-fc08-48cd-ac05-95b3c7537ce7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:11.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8935" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3903,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:11.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 17:56:11.482: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 17:56:11.490: INFO: Waiting for terminating namespaces to be deleted... 
Oct 5 17:56:11.492: INFO: Logging pods the apiserver thinks is on node latest-worker before test Oct 5 17:56:11.497: INFO: kindnet-9tmlz from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 17:56:11.497: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 17:56:11.497: INFO: kube-proxy-fk9hq from kube-system started at 2020-09-23 08:30:39 +0000 UTC (1 container statuses recorded) Oct 5 17:56:11.497: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 17:56:11.497: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Oct 5 17:56:11.501: INFO: kindnet-z6tnh from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 17:56:11.501: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 17:56:11.502: INFO: kube-proxy-whjz5 from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 17:56:11.502: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.163b2a45c1a61d86], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.163b2a45c47f171d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:12.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7943" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":230,"skipped":3916,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:12.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-t5cz 
STEP: Creating a pod to test atomic-volume-subpath Oct 5 17:56:12.617: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t5cz" in namespace "subpath-9648" to be "Succeeded or Failed" Oct 5 17:56:12.621: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Pending", Reason="", readiness=false. Elapsed: 3.323213ms Oct 5 17:56:14.625: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008040999s Oct 5 17:56:16.628: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 4.010799578s Oct 5 17:56:18.634: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 6.016701207s Oct 5 17:56:20.639: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 8.021565989s Oct 5 17:56:22.644: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 10.026974281s Oct 5 17:56:24.649: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 12.031308744s Oct 5 17:56:26.660: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 14.042896937s Oct 5 17:56:28.664: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 16.046728872s Oct 5 17:56:30.669: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 18.051429697s Oct 5 17:56:32.673: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 20.056137799s Oct 5 17:56:34.679: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Running", Reason="", readiness=true. Elapsed: 22.061187022s Oct 5 17:56:36.683: INFO: Pod "pod-subpath-test-configmap-t5cz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.065543315s STEP: Saw pod success Oct 5 17:56:36.683: INFO: Pod "pod-subpath-test-configmap-t5cz" satisfied condition "Succeeded or Failed" Oct 5 17:56:36.686: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-t5cz container test-container-subpath-configmap-t5cz: STEP: delete the pod Oct 5 17:56:36.713: INFO: Waiting for pod pod-subpath-test-configmap-t5cz to disappear Oct 5 17:56:36.880: INFO: Pod pod-subpath-test-configmap-t5cz no longer exists STEP: Deleting pod pod-subpath-test-configmap-t5cz Oct 5 17:56:36.880: INFO: Deleting pod "pod-subpath-test-configmap-t5cz" in namespace "subpath-9648" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:36.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9648" for this suite. • [SLOW TEST:24.362 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":231,"skipped":3918,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:36.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Oct 5 17:56:41.574: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7369 pod-service-account-1d154f39-0e3f-4710-9ff7-2fda902f754d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 5 17:56:41.799: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7369 pod-service-account-1d154f39-0e3f-4710-9ff7-2fda902f754d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 5 17:56:42.030: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7369 pod-service-account-1d154f39-0e3f-4710-9ff7-2fda902f754d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:42.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7369" for this suite. 
• [SLOW TEST:5.376 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":232,"skipped":3928,"failed":0} [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:42.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 5 17:56:46.945: INFO: Successfully updated pod "labelsupdate0b7fab7d-3562-49a0-a359-90ce896c8ad6" [AfterEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:50.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5348" for this suite. • [SLOW TEST:8.711 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3928,"failed":0} SSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:50.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:56:51.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9035" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":234,"skipped":3934,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:56:51.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:23.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5948" for this suite. STEP: Destroying namespace "nsdeletetest-9079" for this suite. Oct 5 17:57:23.396: INFO: Namespace nsdeletetest-9079 was already deleted STEP: Destroying namespace "nsdeletetest-260" for this suite. • [SLOW TEST:32.315 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":235,"skipped":3946,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:23.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Oct 5 17:57:23.505: INFO: Waiting up to 5m0s for pod "client-containers-e951e795-e5c5-4529-ab13-148827c625fc" in namespace "containers-735" to be "Succeeded or Failed" Oct 5 17:57:23.509: INFO: Pod "client-containers-e951e795-e5c5-4529-ab13-148827c625fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.760413ms Oct 5 17:57:25.513: INFO: Pod "client-containers-e951e795-e5c5-4529-ab13-148827c625fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008115451s Oct 5 17:57:27.518: INFO: Pod "client-containers-e951e795-e5c5-4529-ab13-148827c625fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012299755s STEP: Saw pod success Oct 5 17:57:27.518: INFO: Pod "client-containers-e951e795-e5c5-4529-ab13-148827c625fc" satisfied condition "Succeeded or Failed" Oct 5 17:57:27.521: INFO: Trying to get logs from node latest-worker2 pod client-containers-e951e795-e5c5-4529-ab13-148827c625fc container test-container: STEP: delete the pod Oct 5 17:57:27.576: INFO: Waiting for pod client-containers-e951e795-e5c5-4529-ab13-148827c625fc to disappear Oct 5 17:57:27.581: INFO: Pod client-containers-e951e795-e5c5-4529-ab13-148827c625fc no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:27.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-735" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":236,"skipped":3955,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:27.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 17:57:27.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc" in namespace "projected-424" to be "Succeeded or Failed" Oct 5 17:57:27.665: INFO: Pod "downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.801065ms Oct 5 17:57:29.670: INFO: Pod "downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00845811s Oct 5 17:57:31.674: INFO: Pod "downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013108003s STEP: Saw pod success Oct 5 17:57:31.674: INFO: Pod "downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc" satisfied condition "Succeeded or Failed" Oct 5 17:57:31.677: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc container client-container: STEP: delete the pod Oct 5 17:57:31.697: INFO: Waiting for pod downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc to disappear Oct 5 17:57:31.716: INFO: Pod downwardapi-volume-9de5b320-0440-4783-ac0b-5d3a6d6b77fc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:31.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-424" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":237,"skipped":3955,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:31.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1e39e1de-9d69-401e-8ebf-fba6df7377dc STEP: Creating a pod to test consume configMaps Oct 5 17:57:31.883: INFO: Waiting up to 5m0s for pod "pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7" in namespace "configmap-4993" to be "Succeeded or Failed" Oct 5 17:57:31.893: INFO: Pod "pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261217ms Oct 5 17:57:33.900: INFO: Pod "pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016655643s Oct 5 17:57:35.905: INFO: Pod "pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021618221s STEP: Saw pod success Oct 5 17:57:35.905: INFO: Pod "pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7" satisfied condition "Succeeded or Failed" Oct 5 17:57:35.908: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7 container configmap-volume-test: STEP: delete the pod Oct 5 17:57:35.934: INFO: Waiting for pod pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7 to disappear Oct 5 17:57:35.947: INFO: Pod pod-configmaps-c23d8c2e-2370-4be4-91b7-261bdb6b5cf7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:35.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4993" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":238,"skipped":3975,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:35.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Oct 5 17:57:41.075: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:41.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5268" for this suite. • [SLOW TEST:5.250 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":239,"skipped":3998,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:41.206: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:46.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7605" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":240,"skipped":4008,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:46.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 17:57:47.450: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 17:57:49.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 17:57:51.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517467, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 17:57:54.534: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:54.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1554" for this suite. STEP: Destroying namespace "webhook-1554-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.627 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":241,"skipped":4008,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:54.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in 
container's args Oct 5 17:57:54.861: INFO: Waiting up to 5m0s for pod "var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d" in namespace "var-expansion-9728" to be "Succeeded or Failed" Oct 5 17:57:54.893: INFO: Pod "var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d": Phase="Pending", Reason="", readiness=false. Elapsed: 32.648419ms Oct 5 17:57:56.897: INFO: Pod "var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036661377s Oct 5 17:57:58.902: INFO: Pod "var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041363742s STEP: Saw pod success Oct 5 17:57:58.902: INFO: Pod "var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d" satisfied condition "Succeeded or Failed" Oct 5 17:57:58.906: INFO: Trying to get logs from node latest-worker pod var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d container dapi-container: STEP: delete the pod Oct 5 17:57:59.039: INFO: Waiting for pod var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d to disappear Oct 5 17:57:59.081: INFO: Pod var-expansion-ec5820e4-1c92-4789-87fd-3fef8e76a95d no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:57:59.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9728" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":242,"skipped":4020,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:57:59.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 5 17:57:59.200: INFO: >>> kubeConfig: /root/.kube/config Oct 5 17:58:02.180: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:58:14.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3641" for this suite. 
• [SLOW TEST:14.957 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":243,"skipped":4025,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:58:14.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 
17:58:14.127: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d8ea8d0e-ce30-4423-8ad1-05c1dc3c6c1e" in namespace "security-context-test-5408" to be "Succeeded or Failed" Oct 5 17:58:14.147: INFO: Pod "alpine-nnp-false-d8ea8d0e-ce30-4423-8ad1-05c1dc3c6c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.625451ms Oct 5 17:58:16.152: INFO: Pod "alpine-nnp-false-d8ea8d0e-ce30-4423-8ad1-05c1dc3c6c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02521065s Oct 5 17:58:18.157: INFO: Pod "alpine-nnp-false-d8ea8d0e-ce30-4423-8ad1-05c1dc3c6c1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029723026s Oct 5 17:58:20.161: INFO: Pod "alpine-nnp-false-d8ea8d0e-ce30-4423-8ad1-05c1dc3c6c1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034154878s Oct 5 17:58:20.161: INFO: Pod "alpine-nnp-false-d8ea8d0e-ce30-4423-8ad1-05c1dc3c6c1e" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:58:20.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5408" for this suite. 
• [SLOW TEST:6.123 seconds] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":4042,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:58:20.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Oct 5 
17:58:20.234: INFO: Waiting up to 5m0s for pod "client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d" in namespace "containers-8959" to be "Succeeded or Failed" Oct 5 17:58:20.282: INFO: Pod "client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.722763ms Oct 5 17:58:22.287: INFO: Pod "client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052695448s Oct 5 17:58:24.300: INFO: Pod "client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065992397s STEP: Saw pod success Oct 5 17:58:24.300: INFO: Pod "client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d" satisfied condition "Succeeded or Failed" Oct 5 17:58:24.303: INFO: Trying to get logs from node latest-worker2 pod client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d container test-container: STEP: delete the pod Oct 5 17:58:24.339: INFO: Waiting for pod client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d to disappear Oct 5 17:58:24.350: INFO: Pod client-containers-c8b0c425-69d2-4e7b-b3e1-47d5bf6d853d no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:58:24.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8959" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":4060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:58:24.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 17:58:24.666: INFO: Create a RollingUpdate DaemonSet Oct 5 17:58:24.728: INFO: Check that daemon pods launch on every node of the cluster Oct 5 17:58:24.740: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:24.745: INFO: Number of nodes with available pods: 0 Oct 5 17:58:24.745: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:58:25.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Oct 5 17:58:25.754: INFO: Number of nodes with available pods: 0 Oct 5 17:58:25.754: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:58:26.895: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:26.898: INFO: Number of nodes with available pods: 0 Oct 5 17:58:26.898: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:58:27.811: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:27.815: INFO: Number of nodes with available pods: 0 Oct 5 17:58:27.815: INFO: Node latest-worker is running more than one daemon pod Oct 5 17:58:28.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:28.753: INFO: Number of nodes with available pods: 1 Oct 5 17:58:28.753: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 17:58:29.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:29.754: INFO: Number of nodes with available pods: 2 Oct 5 17:58:29.754: INFO: Number of running nodes: 2, number of available pods: 2 Oct 5 17:58:29.754: INFO: Update the DaemonSet to trigger a rollout Oct 5 17:58:29.761: INFO: Updating DaemonSet daemon-set Oct 5 17:58:40.857: INFO: Roll back the DaemonSet before rollout is complete Oct 5 17:58:40.864: INFO: Updating DaemonSet daemon-set Oct 5 17:58:40.864: INFO: Make sure DaemonSet rollback is complete Oct 5 17:58:40.902: INFO: Wrong image for pod: daemon-set-6xg7v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Oct 5 17:58:40.902: INFO: Pod daemon-set-6xg7v is not available Oct 5 17:58:40.915: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:41.921: INFO: Wrong image for pod: daemon-set-6xg7v. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Oct 5 17:58:41.921: INFO: Pod daemon-set-6xg7v is not available Oct 5 17:58:41.926: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 17:58:42.921: INFO: Pod daemon-set-kps7r is not available Oct 5 17:58:42.925: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-758, will wait for the garbage collector to delete the pods Oct 5 17:58:42.994: INFO: Deleting DaemonSet.extensions daemon-set took: 7.270557ms Oct 5 17:58:43.394: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.230052ms Oct 5 17:58:46.797: INFO: Number of nodes with available pods: 0 Oct 5 17:58:46.797: INFO: Number of running nodes: 0, number of available pods: 0 Oct 5 17:58:46.821: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-758/daemonsets","resourceVersion":"3414708"},"items":null} Oct 5 17:58:46.823: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-758/pods","resourceVersion":"3414708"},"items":null} [AfterEach] [sig-apps] 
Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 17:58:46.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-758" for this suite. • [SLOW TEST:22.480 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":246,"skipped":4083,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:58:46.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 5 17:58:46.908: INFO: Waiting up to 1m0s for all nodes to be ready Oct 5 17:59:46.931: INFO: Waiting 
for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 17:59:46.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Oct 5 17:59:51.087: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:00:11.274: INFO: pods created so far: [1 1 1] Oct 5 18:00:11.274: INFO: length of pods created so far: 3 Oct 5 18:00:19.283: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:00:26.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4244" for this suite. 
[AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:00:26.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9275" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:99.700 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":247,"skipped":4089,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:00:26.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 18:00:26.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c" in namespace "projected-2948" to be "Succeeded or Failed" Oct 5 18:00:26.660: INFO: Pod "downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.339804ms Oct 5 18:00:28.775: INFO: Pod "downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123796561s Oct 5 18:00:30.779: INFO: Pod "downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.128257686s STEP: Saw pod success Oct 5 18:00:30.779: INFO: Pod "downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c" satisfied condition "Succeeded or Failed" Oct 5 18:00:30.783: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c container client-container: STEP: delete the pod Oct 5 18:00:30.831: INFO: Waiting for pod downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c to disappear Oct 5 18:00:30.863: INFO: Pod downwardapi-volume-74ac64a5-05d3-43df-a7d1-f2ed97f7e37c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:00:30.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2948" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":248,"skipped":4149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:00:30.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 5 18:00:30.922: INFO: PodSpec: initContainers in spec.initContainers Oct 5 18:01:21.045: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e29bf8e6-6289-476f-84f6-d7bb67841cdf", GenerateName:"", Namespace:"init-container-2692", SelfLink:"/api/v1/namespaces/init-container-2692/pods/pod-init-e29bf8e6-6289-476f-84f6-d7bb67841cdf", UID:"1fc3ea93-a6fc-4c86-8c9c-43d9dc98e784", ResourceVersion:"3415424", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63737517630, loc:(*time.Location)(0x7701840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"922502288"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003692220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003692240)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003692260), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003692280)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2cxfh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0058de100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2cxfh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2cxfh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2cxfh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0051402b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003718150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005140340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005140360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005140368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00514036c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0017fe060), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517631, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517631, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517631, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737517630, loc:(*time.Location)(0x7701840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.1.62", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.62"}}, StartTime:(*v1.Time)(0xc0036922a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003718230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0037182a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://5f9052259b372bf796d972e9a8dc29b807832c97b83c1859a7ecfc799ebeaf82", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036922e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036922c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0051403ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:01:21.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2692" for this suite. 
• [SLOW TEST:50.229 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":249,"skipped":4194,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:01:21.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:01:32.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-558" for this suite. • [SLOW TEST:11.141 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":250,"skipped":4201,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:01:32.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 5 18:01:32.343: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415480 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 18:01:32.343: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415480 
0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 5 18:01:42.354: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415518 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 18:01:42.354: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415518 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 5 18:01:52.362: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415550 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:52 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 18:01:52.363: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415550 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 5 18:02:02.372: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415580 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 18:02:02.372: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-a 03f2a28d-5fc7-4a77-b549-2b0f1b73fe3d 3415580 0 2020-10-05 18:01:32 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 18:01:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers 
observe the notification Oct 5 18:02:12.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-b 4a9102ed-015f-4acf-8b71-5f71ceb0bb0e 3415610 0 2020-10-05 18:02:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 18:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 18:02:12.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-b 4a9102ed-015f-4acf-8b71-5f71ceb0bb0e 3415610 0 2020-10-05 18:02:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 18:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 5 18:02:22.390: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-b 4a9102ed-015f-4acf-8b71-5f71ceb0bb0e 3415640 0 2020-10-05 18:02:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 18:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 18:02:22.391: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3084 /api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-configmap-b 4a9102ed-015f-4acf-8b71-5f71ceb0bb0e 3415640 0 2020-10-05 18:02:12 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 18:02:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:02:32.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3084" for this suite. • [SLOW TEST:60.158 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":251,"skipped":4205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:02:32.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 5 18:02:32.560: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:02:32.564: INFO: Number of nodes with available pods: 0 Oct 5 18:02:32.564: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:02:33.662: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:02:33.665: INFO: Number of nodes with available pods: 0 Oct 5 18:02:33.666: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:02:34.686: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:02:34.691: INFO: Number of nodes with available pods: 0 Oct 5 18:02:34.691: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:02:35.583: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:02:35.587: INFO: Number of nodes with available pods: 0 Oct 5 18:02:35.587: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:02:36.569: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
skip checking this node
Oct 5 18:02:36.572: INFO: Number of nodes with available pods: 1
Oct 5 18:02:36.572: INFO: Node latest-worker2 is running more than one daemon pod
Oct 5 18:02:37.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 18:02:37.575: INFO: Number of nodes with available pods: 2
Oct 5 18:02:37.575: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct 5 18:02:37.658: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 18:02:37.693: INFO: Number of nodes with available pods: 1
Oct 5 18:02:37.693: INFO: Node latest-worker2 is running more than one daemon pod
Oct 5 18:02:38.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 18:02:38.703: INFO: Number of nodes with available pods: 1
Oct 5 18:02:38.703: INFO: Node latest-worker2 is running more than one daemon pod
Oct 5 18:02:39.716: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 18:02:39.719: INFO: Number of nodes with available pods: 1
Oct 5 18:02:39.719: INFO: Node latest-worker2 is running more than one daemon pod
Oct 5 18:02:40.714: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 18:02:40.717: INFO: Number of nodes with available pods: 2
Oct 5 18:02:40.717: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to
be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4198, will wait for the garbage collector to delete the pods
Oct 5 18:02:40.795: INFO: Deleting DaemonSet.extensions daemon-set took: 21.014888ms
Oct 5 18:02:41.195: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.332677ms
Oct 5 18:02:49.943: INFO: Number of nodes with available pods: 0
Oct 5 18:02:49.943: INFO: Number of running nodes: 0, number of available pods: 0
Oct 5 18:02:49.946: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4198/daemonsets","resourceVersion":"3415782"},"items":null}
Oct 5 18:02:49.955: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4198/pods","resourceVersion":"3415783"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 18:02:49.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4198" for this suite.
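The revival check above polls once per second ("Number of nodes with available pods: N"), skips the control-plane node whose NoSchedule taint the daemon pods don't tolerate, and succeeds once every schedulable node reports an available pod. A minimal Python sketch of that wait-for-condition pattern (the e2e framework implements this in Go; `poll_until` and `daemonset_ready` are hypothetical stand-ins, not the framework's helpers):

```python
import time

def poll_until(check, interval=1.0, timeout=300.0):
    """Call check() every `interval` seconds until it returns True or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("timed out waiting for the condition")
        time.sleep(interval)

# Stand-in for the "Number of nodes with available pods" probe: one more
# daemon pod becomes available on each poll, as in the 18:02:32 -> 18:02:37
# progression above.
available = 0
def daemonset_ready():
    global available
    available += 1
    return available >= 2  # two schedulable worker nodes in this cluster

poll_until(daemonset_ready, interval=0.01, timeout=1.0)
print(available)  # 2
```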
• [SLOW TEST:17.566 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":252,"skipped":4237,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 18:02:49.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 5 18:02:50.580: INFO: starting watch
STEP: patching
STEP: updating
Oct 5 18:02:50.617: INFO: waiting for watch events with expected annotations
Oct 5 18:02:50.617: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating
/approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 18:02:50.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-8158" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":253,"skipped":4260,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 18:02:50.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-307ade68-9480-4376-b809-682bbca74c57 in namespace container-probe-8671
Oct 5 18:02:54.847: INFO: Started pod liveness-307ade68-9480-4376-b809-682bbca74c57 in
namespace container-probe-8671
STEP: checking the pod's current state and verifying that restartCount is present
Oct 5 18:02:54.850: INFO: Initial restart count of pod liveness-307ade68-9480-4376-b809-682bbca74c57 is 0
Oct 5 18:03:08.888: INFO: Restart count of pod container-probe-8671/liveness-307ade68-9480-4376-b809-682bbca74c57 is now 1 (14.037444004s elapsed)
Oct 5 18:03:28.942: INFO: Restart count of pod container-probe-8671/liveness-307ade68-9480-4376-b809-682bbca74c57 is now 2 (34.091780852s elapsed)
Oct 5 18:03:48.987: INFO: Restart count of pod container-probe-8671/liveness-307ade68-9480-4376-b809-682bbca74c57 is now 3 (54.136774042s elapsed)
Oct 5 18:04:07.417: INFO: Restart count of pod container-probe-8671/liveness-307ade68-9480-4376-b809-682bbca74c57 is now 4 (1m12.566885068s elapsed)
Oct 5 18:05:21.764: INFO: Restart count of pod container-probe-8671/liveness-307ade68-9480-4376-b809-682bbca74c57 is now 5 (2m26.9137475s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 18:05:21.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8671" for this suite.
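The gaps between the restarts above lengthen (about 14s, then ~20s apart, then over a minute) because the kubelet restarts a crashing container with exponential backoff: roughly a 10-second base delay that doubles per restart, capped at five minutes. A sketch of that schedule with illustrative constants (the kubelet's exact timing also involves jitter, probe periods, and a backoff reset once a container runs cleanly for a while):

```python
def crashloop_backoff(restart_number, base=10.0, cap=300.0):
    """Approximate delay in seconds before restart `restart_number` (1-based)."""
    return min(base * 2 ** (restart_number - 1), cap)

delays = [crashloop_backoff(n) for n in range(1, 7)]
print(delays)  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```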
• [SLOW TEST:151.047 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4265,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 18:05:21.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-3200
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-3200 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3200 Oct 5 18:05:22.292: INFO: Found 0 stateful pods, waiting for 1 Oct 5 18:05:32.297: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 5 18:05:32.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 18:05:36.168: INFO: stderr: "I1005 18:05:36.021697 3168 log.go:181] (0xc000800000) (0xc000d0e1e0) Create stream\nI1005 18:05:36.021758 3168 log.go:181] (0xc000800000) (0xc000d0e1e0) Stream added, broadcasting: 1\nI1005 18:05:36.023780 3168 log.go:181] (0xc000800000) Reply frame received for 1\nI1005 18:05:36.023868 3168 log.go:181] (0xc000800000) (0xc0004e6c80) Create stream\nI1005 18:05:36.023900 3168 log.go:181] (0xc000800000) (0xc0004e6c80) Stream added, broadcasting: 3\nI1005 18:05:36.024745 3168 log.go:181] (0xc000800000) Reply frame received for 3\nI1005 18:05:36.024785 3168 log.go:181] (0xc000800000) (0xc000c70000) Create stream\nI1005 18:05:36.024797 3168 log.go:181] (0xc000800000) (0xc000c70000) Stream added, broadcasting: 5\nI1005 18:05:36.025716 3168 log.go:181] (0xc000800000) Reply frame received for 5\nI1005 18:05:36.128978 3168 log.go:181] (0xc000800000) Data frame received for 5\nI1005 18:05:36.129012 3168 log.go:181] (0xc000c70000) (5) Data frame handling\nI1005 18:05:36.129034 3168 log.go:181] (0xc000c70000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 18:05:36.161042 3168 log.go:181] (0xc000800000) Data frame received 
for 3\nI1005 18:05:36.161085 3168 log.go:181] (0xc0004e6c80) (3) Data frame handling\nI1005 18:05:36.161121 3168 log.go:181] (0xc0004e6c80) (3) Data frame sent\nI1005 18:05:36.161396 3168 log.go:181] (0xc000800000) Data frame received for 3\nI1005 18:05:36.161430 3168 log.go:181] (0xc0004e6c80) (3) Data frame handling\nI1005 18:05:36.161640 3168 log.go:181] (0xc000800000) Data frame received for 5\nI1005 18:05:36.161673 3168 log.go:181] (0xc000c70000) (5) Data frame handling\nI1005 18:05:36.163339 3168 log.go:181] (0xc000800000) Data frame received for 1\nI1005 18:05:36.163379 3168 log.go:181] (0xc000d0e1e0) (1) Data frame handling\nI1005 18:05:36.163402 3168 log.go:181] (0xc000d0e1e0) (1) Data frame sent\nI1005 18:05:36.163444 3168 log.go:181] (0xc000800000) (0xc000d0e1e0) Stream removed, broadcasting: 1\nI1005 18:05:36.163486 3168 log.go:181] (0xc000800000) Go away received\nI1005 18:05:36.163880 3168 log.go:181] (0xc000800000) (0xc000d0e1e0) Stream removed, broadcasting: 1\nI1005 18:05:36.163905 3168 log.go:181] (0xc000800000) (0xc0004e6c80) Stream removed, broadcasting: 3\nI1005 18:05:36.163917 3168 log.go:181] (0xc000800000) (0xc000c70000) Stream removed, broadcasting: 5\n" Oct 5 18:05:36.169: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 18:05:36.169: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 18:05:36.176: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 5 18:05:46.189: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 18:05:46.189: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 18:05:46.207: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:05:46.207: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-10-05 18:05:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:05:46.207: INFO: Oct 5 18:05:46.207: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 5 18:05:47.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995949096s Oct 5 18:05:49.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989715681s Oct 5 18:05:50.381: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.825930497s Oct 5 18:05:51.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.822681288s Oct 5 18:05:52.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.819142019s Oct 5 18:05:53.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.81428054s Oct 5 18:05:54.446: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.761901723s Oct 5 18:05:55.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 756.765318ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3200 Oct 5 18:05:56.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:05:56.682: INFO: stderr: "I1005 18:05:56.594714 3186 log.go:181] (0xc0007b76b0) (0xc0007aeaa0) Create stream\nI1005 18:05:56.594792 3186 log.go:181] (0xc0007b76b0) (0xc0007aeaa0) Stream added, broadcasting: 1\nI1005 18:05:56.600462 3186 log.go:181] (0xc0007b76b0) Reply frame received for 1\nI1005 18:05:56.600506 3186 log.go:181] (0xc0007b76b0) (0xc000cb00a0) Create stream\nI1005 18:05:56.600527 3186 log.go:181] (0xc0007b76b0) 
(0xc000cb00a0) Stream added, broadcasting: 3\nI1005 18:05:56.601628 3186 log.go:181] (0xc0007b76b0) Reply frame received for 3\nI1005 18:05:56.601678 3186 log.go:181] (0xc0007b76b0) (0xc0007ae000) Create stream\nI1005 18:05:56.601693 3186 log.go:181] (0xc0007b76b0) (0xc0007ae000) Stream added, broadcasting: 5\nI1005 18:05:56.602542 3186 log.go:181] (0xc0007b76b0) Reply frame received for 5\nI1005 18:05:56.674821 3186 log.go:181] (0xc0007b76b0) Data frame received for 5\nI1005 18:05:56.674879 3186 log.go:181] (0xc0007ae000) (5) Data frame handling\nI1005 18:05:56.674902 3186 log.go:181] (0xc0007ae000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 18:05:56.674931 3186 log.go:181] (0xc0007b76b0) Data frame received for 3\nI1005 18:05:56.674941 3186 log.go:181] (0xc000cb00a0) (3) Data frame handling\nI1005 18:05:56.674951 3186 log.go:181] (0xc000cb00a0) (3) Data frame sent\nI1005 18:05:56.674964 3186 log.go:181] (0xc0007b76b0) Data frame received for 3\nI1005 18:05:56.674975 3186 log.go:181] (0xc000cb00a0) (3) Data frame handling\nI1005 18:05:56.675031 3186 log.go:181] (0xc0007b76b0) Data frame received for 5\nI1005 18:05:56.675054 3186 log.go:181] (0xc0007ae000) (5) Data frame handling\nI1005 18:05:56.676457 3186 log.go:181] (0xc0007b76b0) Data frame received for 1\nI1005 18:05:56.676591 3186 log.go:181] (0xc0007aeaa0) (1) Data frame handling\nI1005 18:05:56.676620 3186 log.go:181] (0xc0007aeaa0) (1) Data frame sent\nI1005 18:05:56.676636 3186 log.go:181] (0xc0007b76b0) (0xc0007aeaa0) Stream removed, broadcasting: 1\nI1005 18:05:56.676653 3186 log.go:181] (0xc0007b76b0) Go away received\nI1005 18:05:56.677104 3186 log.go:181] (0xc0007b76b0) (0xc0007aeaa0) Stream removed, broadcasting: 1\nI1005 18:05:56.677124 3186 log.go:181] (0xc0007b76b0) (0xc000cb00a0) Stream removed, broadcasting: 3\nI1005 18:05:56.677135 3186 log.go:181] (0xc0007b76b0) (0xc0007ae000) Stream removed, broadcasting: 5\n" Oct 5 18:05:56.682: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 18:05:56.682: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 18:05:56.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:05:56.900: INFO: stderr: "I1005 18:05:56.817006 3205 log.go:181] (0xc000e8f080) (0xc000203220) Create stream\nI1005 18:05:56.817063 3205 log.go:181] (0xc000e8f080) (0xc000203220) Stream added, broadcasting: 1\nI1005 18:05:56.821842 3205 log.go:181] (0xc000e8f080) Reply frame received for 1\nI1005 18:05:56.821898 3205 log.go:181] (0xc000e8f080) (0xc000d46000) Create stream\nI1005 18:05:56.821913 3205 log.go:181] (0xc000e8f080) (0xc000d46000) Stream added, broadcasting: 3\nI1005 18:05:56.822841 3205 log.go:181] (0xc000e8f080) Reply frame received for 3\nI1005 18:05:56.822867 3205 log.go:181] (0xc000e8f080) (0xc0002021e0) Create stream\nI1005 18:05:56.822873 3205 log.go:181] (0xc000e8f080) (0xc0002021e0) Stream added, broadcasting: 5\nI1005 18:05:56.824619 3205 log.go:181] (0xc000e8f080) Reply frame received for 5\nI1005 18:05:56.891830 3205 log.go:181] (0xc000e8f080) Data frame received for 3\nI1005 18:05:56.891869 3205 log.go:181] (0xc000d46000) (3) Data frame handling\nI1005 18:05:56.891900 3205 log.go:181] (0xc000e8f080) Data frame received for 5\nI1005 18:05:56.891924 3205 log.go:181] (0xc0002021e0) (5) Data frame handling\nI1005 18:05:56.891934 3205 log.go:181] (0xc0002021e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1005 18:05:56.891950 3205 log.go:181] (0xc000d46000) (3) Data frame sent\nI1005 18:05:56.891958 3205 log.go:181] (0xc000e8f080) Data frame received for 3\nI1005 18:05:56.891964 3205 
log.go:181] (0xc000d46000) (3) Data frame handling\nI1005 18:05:56.892155 3205 log.go:181] (0xc000e8f080) Data frame received for 5\nI1005 18:05:56.892172 3205 log.go:181] (0xc0002021e0) (5) Data frame handling\nI1005 18:05:56.894289 3205 log.go:181] (0xc000e8f080) Data frame received for 1\nI1005 18:05:56.894309 3205 log.go:181] (0xc000203220) (1) Data frame handling\nI1005 18:05:56.894321 3205 log.go:181] (0xc000203220) (1) Data frame sent\nI1005 18:05:56.894337 3205 log.go:181] (0xc000e8f080) (0xc000203220) Stream removed, broadcasting: 1\nI1005 18:05:56.894454 3205 log.go:181] (0xc000e8f080) Go away received\nI1005 18:05:56.894724 3205 log.go:181] (0xc000e8f080) (0xc000203220) Stream removed, broadcasting: 1\nI1005 18:05:56.894744 3205 log.go:181] (0xc000e8f080) (0xc000d46000) Stream removed, broadcasting: 3\nI1005 18:05:56.894752 3205 log.go:181] (0xc000e8f080) (0xc0002021e0) Stream removed, broadcasting: 5\n" Oct 5 18:05:56.900: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 18:05:56.900: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 18:05:56.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:05:57.118: INFO: stderr: "I1005 18:05:57.040286 3223 log.go:181] (0xc00028e000) (0xc000982280) Create stream\nI1005 18:05:57.040368 3223 log.go:181] (0xc00028e000) (0xc000982280) Stream added, broadcasting: 1\nI1005 18:05:57.042678 3223 log.go:181] (0xc00028e000) Reply frame received for 1\nI1005 18:05:57.042712 3223 log.go:181] (0xc00028e000) (0xc000982320) Create stream\nI1005 18:05:57.042720 3223 log.go:181] (0xc00028e000) (0xc000982320) Stream added, broadcasting: 3\nI1005 18:05:57.043874 3223 log.go:181] (0xc00028e000) Reply frame received for 
3\nI1005 18:05:57.043925 3223 log.go:181] (0xc00028e000) (0xc00072c000) Create stream\nI1005 18:05:57.043944 3223 log.go:181] (0xc00028e000) (0xc00072c000) Stream added, broadcasting: 5\nI1005 18:05:57.046481 3223 log.go:181] (0xc00028e000) Reply frame received for 5\nI1005 18:05:57.110392 3223 log.go:181] (0xc00028e000) Data frame received for 3\nI1005 18:05:57.110423 3223 log.go:181] (0xc000982320) (3) Data frame handling\nI1005 18:05:57.110432 3223 log.go:181] (0xc000982320) (3) Data frame sent\nI1005 18:05:57.110445 3223 log.go:181] (0xc00028e000) Data frame received for 3\nI1005 18:05:57.110456 3223 log.go:181] (0xc000982320) (3) Data frame handling\nI1005 18:05:57.110465 3223 log.go:181] (0xc00028e000) Data frame received for 5\nI1005 18:05:57.110470 3223 log.go:181] (0xc00072c000) (5) Data frame handling\nI1005 18:05:57.110484 3223 log.go:181] (0xc00072c000) (5) Data frame sent\nI1005 18:05:57.110489 3223 log.go:181] (0xc00028e000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1005 18:05:57.110495 3223 log.go:181] (0xc00072c000) (5) Data frame handling\nI1005 18:05:57.112225 3223 log.go:181] (0xc00028e000) Data frame received for 1\nI1005 18:05:57.112261 3223 log.go:181] (0xc000982280) (1) Data frame handling\nI1005 18:05:57.112283 3223 log.go:181] (0xc000982280) (1) Data frame sent\nI1005 18:05:57.112306 3223 log.go:181] (0xc00028e000) (0xc000982280) Stream removed, broadcasting: 1\nI1005 18:05:57.112530 3223 log.go:181] (0xc00028e000) Go away received\nI1005 18:05:57.113161 3223 log.go:181] (0xc00028e000) (0xc000982280) Stream removed, broadcasting: 1\nI1005 18:05:57.113195 3223 log.go:181] (0xc00028e000) (0xc000982320) Stream removed, broadcasting: 3\nI1005 18:05:57.113215 3223 log.go:181] (0xc00028e000) (0xc00072c000) Stream removed, broadcasting: 5\n" Oct 5 18:05:57.118: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" 
Oct 5 18:05:57.118: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 18:05:57.122: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Oct 5 18:06:07.148: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 18:06:07.148: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 18:06:07.148: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 5 18:06:07.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 18:06:07.383: INFO: stderr: "I1005 18:06:07.289550 3241 log.go:181] (0xc000219550) (0xc000d08a00) Create stream\nI1005 18:06:07.289598 3241 log.go:181] (0xc000219550) (0xc000d08a00) Stream added, broadcasting: 1\nI1005 18:06:07.294215 3241 log.go:181] (0xc000219550) Reply frame received for 1\nI1005 18:06:07.294268 3241 log.go:181] (0xc000219550) (0xc000d08aa0) Create stream\nI1005 18:06:07.294429 3241 log.go:181] (0xc000219550) (0xc000d08aa0) Stream added, broadcasting: 3\nI1005 18:06:07.295678 3241 log.go:181] (0xc000219550) Reply frame received for 3\nI1005 18:06:07.295734 3241 log.go:181] (0xc000219550) (0xc000c86000) Create stream\nI1005 18:06:07.295751 3241 log.go:181] (0xc000219550) (0xc000c86000) Stream added, broadcasting: 5\nI1005 18:06:07.296649 3241 log.go:181] (0xc000219550) Reply frame received for 5\nI1005 18:06:07.377155 3241 log.go:181] (0xc000219550) Data frame received for 3\nI1005 18:06:07.377197 3241 log.go:181] (0xc000d08aa0) (3) Data frame handling\nI1005 18:06:07.377213 3241 log.go:181] (0xc000d08aa0) (3) Data frame sent\nI1005 18:06:07.377224 3241 
log.go:181] (0xc000219550) Data frame received for 3\nI1005 18:06:07.377238 3241 log.go:181] (0xc000d08aa0) (3) Data frame handling\nI1005 18:06:07.377270 3241 log.go:181] (0xc000219550) Data frame received for 5\nI1005 18:06:07.377294 3241 log.go:181] (0xc000c86000) (5) Data frame handling\nI1005 18:06:07.377318 3241 log.go:181] (0xc000c86000) (5) Data frame sent\nI1005 18:06:07.377331 3241 log.go:181] (0xc000219550) Data frame received for 5\nI1005 18:06:07.377339 3241 log.go:181] (0xc000c86000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 18:06:07.378409 3241 log.go:181] (0xc000219550) Data frame received for 1\nI1005 18:06:07.378431 3241 log.go:181] (0xc000d08a00) (1) Data frame handling\nI1005 18:06:07.378444 3241 log.go:181] (0xc000d08a00) (1) Data frame sent\nI1005 18:06:07.378454 3241 log.go:181] (0xc000219550) (0xc000d08a00) Stream removed, broadcasting: 1\nI1005 18:06:07.378475 3241 log.go:181] (0xc000219550) Go away received\nI1005 18:06:07.378907 3241 log.go:181] (0xc000219550) (0xc000d08a00) Stream removed, broadcasting: 1\nI1005 18:06:07.378930 3241 log.go:181] (0xc000219550) (0xc000d08aa0) Stream removed, broadcasting: 3\nI1005 18:06:07.378937 3241 log.go:181] (0xc000219550) (0xc000c86000) Stream removed, broadcasting: 5\n" Oct 5 18:06:07.383: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 18:06:07.383: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 18:06:07.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 18:06:07.670: INFO: stderr: "I1005 18:06:07.521695 3259 log.go:181] (0xc0004f11e0) (0xc0007de780) Create stream\nI1005 18:06:07.521754 3259 log.go:181] (0xc0004f11e0) (0xc0007de780) Stream added, 
broadcasting: 1\nI1005 18:06:07.524385 3259 log.go:181] (0xc0004f11e0) Reply frame received for 1\nI1005 18:06:07.524442 3259 log.go:181] (0xc0004f11e0) (0xc000cc4000) Create stream\nI1005 18:06:07.524469 3259 log.go:181] (0xc0004f11e0) (0xc000cc4000) Stream added, broadcasting: 3\nI1005 18:06:07.525468 3259 log.go:181] (0xc0004f11e0) Reply frame received for 3\nI1005 18:06:07.525489 3259 log.go:181] (0xc0004f11e0) (0xc0007de820) Create stream\nI1005 18:06:07.525495 3259 log.go:181] (0xc0004f11e0) (0xc0007de820) Stream added, broadcasting: 5\nI1005 18:06:07.526435 3259 log.go:181] (0xc0004f11e0) Reply frame received for 5\nI1005 18:06:07.595410 3259 log.go:181] (0xc0004f11e0) Data frame received for 5\nI1005 18:06:07.595454 3259 log.go:181] (0xc0007de820) (5) Data frame handling\nI1005 18:06:07.595479 3259 log.go:181] (0xc0007de820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 18:06:07.659905 3259 log.go:181] (0xc0004f11e0) Data frame received for 5\nI1005 18:06:07.659968 3259 log.go:181] (0xc0007de820) (5) Data frame handling\nI1005 18:06:07.660027 3259 log.go:181] (0xc0004f11e0) Data frame received for 3\nI1005 18:06:07.660051 3259 log.go:181] (0xc000cc4000) (3) Data frame handling\nI1005 18:06:07.660072 3259 log.go:181] (0xc000cc4000) (3) Data frame sent\nI1005 18:06:07.660092 3259 log.go:181] (0xc0004f11e0) Data frame received for 3\nI1005 18:06:07.660110 3259 log.go:181] (0xc000cc4000) (3) Data frame handling\nI1005 18:06:07.662380 3259 log.go:181] (0xc0004f11e0) Data frame received for 1\nI1005 18:06:07.662418 3259 log.go:181] (0xc0007de780) (1) Data frame handling\nI1005 18:06:07.662436 3259 log.go:181] (0xc0007de780) (1) Data frame sent\nI1005 18:06:07.662450 3259 log.go:181] (0xc0004f11e0) (0xc0007de780) Stream removed, broadcasting: 1\nI1005 18:06:07.662507 3259 log.go:181] (0xc0004f11e0) Go away received\nI1005 18:06:07.662839 3259 log.go:181] (0xc0004f11e0) (0xc0007de780) Stream removed, broadcasting: 1\nI1005 
18:06:07.662864 3259 log.go:181] (0xc0004f11e0) (0xc000cc4000) Stream removed, broadcasting: 3\nI1005 18:06:07.662880 3259 log.go:181] (0xc0004f11e0) (0xc0007de820) Stream removed, broadcasting: 5\n" Oct 5 18:06:07.670: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 18:06:07.670: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 18:06:07.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 18:06:07.885: INFO: stderr: "I1005 18:06:07.794584 3277 log.go:181] (0xc000ddb080) (0xc00043d900) Create stream\nI1005 18:06:07.794637 3277 log.go:181] (0xc000ddb080) (0xc00043d900) Stream added, broadcasting: 1\nI1005 18:06:07.799353 3277 log.go:181] (0xc000ddb080) Reply frame received for 1\nI1005 18:06:07.799387 3277 log.go:181] (0xc000ddb080) (0xc0008541e0) Create stream\nI1005 18:06:07.799395 3277 log.go:181] (0xc000ddb080) (0xc0008541e0) Stream added, broadcasting: 3\nI1005 18:06:07.799958 3277 log.go:181] (0xc000ddb080) Reply frame received for 3\nI1005 18:06:07.799979 3277 log.go:181] (0xc000ddb080) (0xc000854280) Create stream\nI1005 18:06:07.799985 3277 log.go:181] (0xc000ddb080) (0xc000854280) Stream added, broadcasting: 5\nI1005 18:06:07.800602 3277 log.go:181] (0xc000ddb080) Reply frame received for 5\nI1005 18:06:07.847316 3277 log.go:181] (0xc000ddb080) Data frame received for 5\nI1005 18:06:07.847351 3277 log.go:181] (0xc000854280) (5) Data frame handling\nI1005 18:06:07.847367 3277 log.go:181] (0xc000854280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 18:06:07.878770 3277 log.go:181] (0xc000ddb080) Data frame received for 5\nI1005 18:06:07.878810 3277 log.go:181] (0xc000854280) (5) Data frame handling\nI1005 
18:06:07.878839 3277 log.go:181] (0xc000ddb080) Data frame received for 3\nI1005 18:06:07.878865 3277 log.go:181] (0xc0008541e0) (3) Data frame handling\nI1005 18:06:07.878886 3277 log.go:181] (0xc0008541e0) (3) Data frame sent\nI1005 18:06:07.878906 3277 log.go:181] (0xc000ddb080) Data frame received for 3\nI1005 18:06:07.878917 3277 log.go:181] (0xc0008541e0) (3) Data frame handling\nI1005 18:06:07.880335 3277 log.go:181] (0xc000ddb080) Data frame received for 1\nI1005 18:06:07.880360 3277 log.go:181] (0xc00043d900) (1) Data frame handling\nI1005 18:06:07.880378 3277 log.go:181] (0xc00043d900) (1) Data frame sent\nI1005 18:06:07.880386 3277 log.go:181] (0xc000ddb080) (0xc00043d900) Stream removed, broadcasting: 1\nI1005 18:06:07.880685 3277 log.go:181] (0xc000ddb080) (0xc00043d900) Stream removed, broadcasting: 1\nI1005 18:06:07.880700 3277 log.go:181] (0xc000ddb080) (0xc0008541e0) Stream removed, broadcasting: 3\nI1005 18:06:07.880706 3277 log.go:181] (0xc000ddb080) (0xc000854280) Stream removed, broadcasting: 5\n" Oct 5 18:06:07.885: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 18:06:07.885: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 18:06:07.885: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 18:06:07.889: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 5 18:06:17.899: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 18:06:17.899: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 5 18:06:17.899: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 5 18:06:17.936: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:17.936: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:17.936: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:17.936: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:17.936: INFO: Oct 5 18:06:17.936: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:18.940: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:18.940: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:18.940: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:18.941: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:18.941: INFO: Oct 5 18:06:18.941: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:19.946: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:19.946: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:19.946: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:19.946: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:19.946: INFO: Oct 5 18:06:19.946: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:20.952: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:20.952: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:20.952: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:20.952: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:20.952: INFO: Oct 5 18:06:20.952: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:21.958: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:21.958: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:21.958: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:21.958: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:21.958: INFO: Oct 5 18:06:21.958: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:22.963: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:22.963: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:22.963: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:22.963: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:22.963: INFO: Oct 5 18:06:22.963: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:23.969: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:23.969: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 
18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:23.969: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:23.969: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:23.969: INFO: Oct 5 18:06:23.969: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:24.974: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:24.974: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:24.974: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:24.974: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:24.974: INFO: Oct 5 18:06:24.974: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:25.981: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:25.981: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:25.981: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:25.981: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:25.981: INFO: Oct 5 18:06:25.981: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 18:06:26.986: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 18:06:26.986: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:22 +0000 UTC }] Oct 5 18:06:26.986: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:26.986: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:06:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 18:05:46 +0000 UTC }] Oct 5 18:06:26.986: INFO: Oct 5 18:06:26.986: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3200 Oct 5 18:06:27.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:06:28.134: INFO: rc: 1 Oct 5 18:06:28.134: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Oct 5 18:06:38.134: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:06:38.243: INFO: rc: 1 Oct 5 18:06:38.243: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:06:48.243: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:06:48.345: INFO: rc: 1 Oct 5 18:06:48.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:06:58.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:06:58.449: INFO: rc: 1 Oct 5 18:06:58.449: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:07:08.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:07:08.575: INFO: rc: 1 Oct 5 18:07:08.575: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:07:18.575: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:07:18.677: INFO: rc: 1 Oct 5 18:07:18.677: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:07:28.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:07:28.788: INFO: rc: 1 Oct 5 18:07:28.788: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:07:38.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:07:38.897: INFO: rc: 1 Oct 5 18:07:38.897: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:07:48.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:07:49.004: INFO: rc: 1 Oct 5 18:07:49.004: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:07:59.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:07:59.124: INFO: rc: 1 Oct 5 18:07:59.124: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:08:09.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:08:09.233: INFO: rc: 1 Oct 5 18:08:09.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:08:19.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:08:19.356: INFO: rc: 1 Oct 5 18:08:19.356: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:08:29.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:08:29.464: INFO: rc: 1 Oct 5 18:08:29.465: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:08:39.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:08:39.567: INFO: rc: 1 Oct 5 18:08:39.567: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:08:49.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:08:49.672: INFO: rc: 1 Oct 5 18:08:49.672: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:08:59.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:08:59.781: INFO: rc: 1 Oct 5 18:08:59.781: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:09:09.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:09:09.881: INFO: rc: 1 Oct 5 18:09:09.881: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:09:19.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Oct 5 18:09:19.985: INFO: rc: 1 Oct 5 18:09:19.985: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:09:29.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:09:30.086: INFO: rc: 1 Oct 5 18:09:30.086: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:09:40.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:09:40.202: INFO: rc: 1 Oct 5 18:09:40.202: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Oct 5 18:09:50.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' 
Oct 5 18:09:50.302: INFO: rc: 1 Oct 5 18:09:50.302: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[... the same RunHostCmd attempt was retried every 10s from 18:10:00 through 18:11:22, each returning rc 1 with the same NotFound error ...]
Oct 5 18:11:32.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3200 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 18:11:32.897: INFO: rc: 1 Oct 5 18:11:32.897: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Oct 5 18:11:32.897: INFO: Scaling statefulset ss to 0 Oct 5 18:11:32.906: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 18:11:32.908: INFO: Deleting all statefulset in ns statefulset-3200 Oct 5 18:11:32.910: INFO: Scaling statefulset ss to 0 Oct 5 18:11:32.918: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 18:11:32.920: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:11:32.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3200" for this suite.
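The StatefulSet test above retries the same `kubectl exec` at a fixed 10-second interval until it succeeds or the overall deadline passes. A minimal sketch of that fixed-interval retry loop (hypothetical helper names; not the e2e framework's actual RunHostCmd code):

```python
import time

def retry_host_cmd(cmd, attempt, interval=10, timeout=120,
                   clock=time.monotonic, sleep=time.sleep):
    """Run `attempt(cmd)` until it returns rc 0 or `timeout` elapses.

    `attempt` returns (rc, stdout, stderr). A non-zero rc triggers a retry
    after `interval` seconds, mirroring the "Waiting 10s to retry failed
    RunHostCmd" messages in the log above.
    """
    deadline = clock() + timeout
    while True:
        rc, out, err = attempt(cmd)
        if rc == 0:
            return out
        if clock() >= deadline:
            raise TimeoutError(f"command {cmd!r} still failing: {err}")
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the helper testable without real 10-second waits.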
• [SLOW TEST:371.122 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":255,"skipped":4267,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:11:32.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 5 18:11:33.007: INFO: Waiting up to 5m0s for pod "pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c" in namespace 
"emptydir-5830" to be "Succeeded or Failed" Oct 5 18:11:33.066: INFO: Pod "pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.686499ms Oct 5 18:11:35.071: INFO: Pod "pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064846004s Oct 5 18:11:37.084: INFO: Pod "pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077317254s STEP: Saw pod success Oct 5 18:11:37.084: INFO: Pod "pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c" satisfied condition "Succeeded or Failed" Oct 5 18:11:37.087: INFO: Trying to get logs from node latest-worker pod pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c container test-container: STEP: delete the pod Oct 5 18:11:37.127: INFO: Waiting for pod pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c to disappear Oct 5 18:11:37.136: INFO: Pod pod-9bade229-3e78-4372-b9ca-8e338c8f5c5c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:11:37.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5830" for this suite. 
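The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from a loop that polls the pod's phase and logs the elapsed time on each attempt. A rough sketch of that wait loop (assumed names; the real framework's WaitForPod helpers are more elaborate):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300, poll=2,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll `get_phase()` until the pod reaches a terminal phase.

    Each poll prints the current phase and elapsed time, matching the
    Phase="Pending" ... Elapsed: ... lines in the log above.
    """
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll)
    raise TimeoutError("pod did not reach a terminal phase in time")
```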
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4267,"failed":0} SS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:11:37.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 5 18:11:37.565: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 5 18:11:37.568: INFO: starting watch STEP: patching STEP: updating Oct 5 18:11:37.631: INFO: waiting for watch events with expected annotations Oct 5 18:11:37.631: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:11:37.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-647" for this suite. 
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":257,"skipped":4269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:11:37.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 5 18:11:37.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4225' Oct 5 18:11:37.943: INFO: stderr: "" Oct 5 18:11:37.943: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 5 18:11:37.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-4225' Oct 5 18:11:38.074: INFO: stderr: "" 
Oct 5 18:11:38.074: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-05T18:11:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T18:11:37Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T18:11:38Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4225\",\n \"resourceVersion\": \"3417595\",\n \"selfLink\": 
\"/api/v1/namespaces/kubectl-4225/pods/e2e-test-httpd-pod\",\n \"uid\": \"de3dcad9-d559-4c19-b133-2020b10af073\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wh54v\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wh54v\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wh54v\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T18:11:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T18:11:37Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T18:11:37Z\",\n \"message\": \"containers with unready status: 
[e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T18:11:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.16\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-05T18:11:37Z\"\n }\n}\n" Oct 5 18:11:38.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-4225' Oct 5 18:11:38.443: INFO: stderr: "W1005 18:11:38.139373 3891 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Oct 5 18:11:38.443: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Oct 5 18:11:38.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4225' Oct 5 18:11:40.521: INFO: stderr: "" Oct 5 18:11:40.521: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:11:40.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4225" for this suite. 
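The dry-run test fetches the pod as JSON, replays it through `kubectl replace --dry-run`, and then verifies the live pod's image is unchanged. Extracting the container image from such a manifest can be sketched as follows (the embedded JSON is an abridged, hypothetical stand-in for the full dump above):

```python
import json

def container_image(pod_json, container_name):
    """Return the image of the named container in a Pod manifest."""
    pod = json.loads(pod_json)
    for c in pod["spec"]["containers"]:
        if c["name"] == container_name:
            return c["image"]
    raise KeyError(f"no container named {container_name!r}")

# Abridged stand-in for the pod JSON printed by the test above.
pod_doc = json.dumps({
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "e2e-test-httpd-pod", "namespace": "kubectl-4225"},
    "spec": {"containers": [
        {"name": "e2e-test-httpd-pod",
         "image": "docker.io/library/httpd:2.4.38-alpine"},
    ]},
})
```

Because the replace ran server-side as a dry run, nothing was persisted, which is exactly why the verification step expects the original image.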
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":258,"skipped":4292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:11:40.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 5 18:11:40.714: INFO: Waiting up to 1m0s for all nodes to be ready Oct 5 18:12:40.739: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 5 18:12:40.793: INFO: Created pod: pod0-sched-preemption-low-priority Oct 5 18:12:40.895: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:13:15.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4945" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:94.572 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":259,"skipped":4316,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:13:15.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 18:13:16.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 18:13:18.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518396, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518396, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518396, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518395, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 18:13:20.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518396, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518396, 
loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518396, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518395, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 18:13:23.132: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 5 18:13:27.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config attach --namespace=webhook-1568 to-be-attached-pod -i -c=container1' Oct 5 18:13:27.380: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:13:27.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1568" for this suite. STEP: Destroying namespace "webhook-1568-markers" for this suite. 
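The webhook denies the `kubectl attach` (hence rc: 1) by answering the API server's AdmissionReview with `allowed: false`. A minimal sketch of building such a response, using plain dicts in the shape of the admission/v1 API (illustrative only, not the test's actual webhook code):

```python
def deny_attach(review):
    """Build an AdmissionReview response that denies the request.

    `review` is a decoded admission/v1 AdmissionReview request. The
    response must echo the request UID and set allowed=False, which makes
    the API server reject the operation (here, attaching to the pod).
    """
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": False,
            "status": {
                "code": 403,
                "message": "attaching to this pod is denied by webhook",
            },
        },
    }
```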
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.448 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":260,"skipped":4322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:13:27.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 18:13:27.707: INFO: Waiting up to 5m0s for pod "downward-api-70ede92a-17a8-4edc-975f-5783d713537d" in 
namespace "downward-api-1540" to be "Succeeded or Failed" Oct 5 18:13:27.749: INFO: Pod "downward-api-70ede92a-17a8-4edc-975f-5783d713537d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.149469ms Oct 5 18:13:29.753: INFO: Pod "downward-api-70ede92a-17a8-4edc-975f-5783d713537d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045999858s Oct 5 18:13:31.758: INFO: Pod "downward-api-70ede92a-17a8-4edc-975f-5783d713537d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050558143s STEP: Saw pod success Oct 5 18:13:31.758: INFO: Pod "downward-api-70ede92a-17a8-4edc-975f-5783d713537d" satisfied condition "Succeeded or Failed" Oct 5 18:13:31.761: INFO: Trying to get logs from node latest-worker pod downward-api-70ede92a-17a8-4edc-975f-5783d713537d container dapi-container: STEP: delete the pod Oct 5 18:13:31.841: INFO: Waiting for pod downward-api-70ede92a-17a8-4edc-975f-5783d713537d to disappear Oct 5 18:13:31.853: INFO: Pod downward-api-70ede92a-17a8-4edc-975f-5783d713537d no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:13:31.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1540" for this suite. 
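Inside the test pod above, the downward API maps `status.hostIP` to an environment variable via a fieldRef, so the container just reads an ordinary env var. A small sketch of the container-side read (the variable name `HOST_IP` is an assumption for illustration):

```python
import os

def read_downward_env(name="HOST_IP", environ=os.environ):
    """Read a value the downward API injected as an environment variable.

    The fieldRef in the pod spec resolves status.hostIP at scheduling
    time; by the time the container runs, it is a plain env var.
    """
    value = environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} not set; was the fieldRef configured?")
    return value
```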
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":261,"skipped":4355,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:13:31.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 5 18:13:31.915: INFO: Waiting up to 5m0s for pod "pod-47083075-a1bc-44a9-be24-d2f8ba975679" in namespace "emptydir-2531" to be "Succeeded or Failed" Oct 5 18:13:31.941: INFO: Pod "pod-47083075-a1bc-44a9-be24-d2f8ba975679": Phase="Pending", Reason="", readiness=false. Elapsed: 26.288552ms Oct 5 18:13:33.978: INFO: Pod "pod-47083075-a1bc-44a9-be24-d2f8ba975679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063509296s Oct 5 18:13:35.982: INFO: Pod "pod-47083075-a1bc-44a9-be24-d2f8ba975679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066868053s Oct 5 18:13:37.987: INFO: Pod "pod-47083075-a1bc-44a9-be24-d2f8ba975679": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.072030011s STEP: Saw pod success Oct 5 18:13:37.987: INFO: Pod "pod-47083075-a1bc-44a9-be24-d2f8ba975679" satisfied condition "Succeeded or Failed" Oct 5 18:13:37.990: INFO: Trying to get logs from node latest-worker2 pod pod-47083075-a1bc-44a9-be24-d2f8ba975679 container test-container: STEP: delete the pod Oct 5 18:13:38.063: INFO: Waiting for pod pod-47083075-a1bc-44a9-be24-d2f8ba975679 to disappear Oct 5 18:13:38.069: INFO: Pod pod-47083075-a1bc-44a9-be24-d2f8ba975679 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:13:38.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2531" for this suite. • [SLOW TEST:6.216 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4355,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Oct 5 18:13:38.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1005 18:13:48.202488 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 18:14:50.218: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:14:50.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6979" for this suite. 
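The garbage collector test above deletes the replication controller "when not orphaning", i.e. the delete request carries a cascading propagation policy rather than `Orphan`. A minimal sketch of what that looks like at the API level, assuming a hypothetical scratch file (the conformance test drives this through the Go client, not a JSON file):

```shell
# Hedged sketch: a DeleteOptions body for deleting an RC while letting the
# garbage collector cascade to its pods. "Background" (or "Foreground")
# cascades; "Orphan" would leave the pods behind. File path is illustrative.
cat > /tmp/delete-opts.json <<'EOF'
{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}
EOF
grep -o '"propagationPolicy":"[^"]*"' /tmp/delete-opts.json
# -> "propagationPolicy":"Background"
```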
• [SLOW TEST:72.152 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":263,"skipped":4372,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:14:50.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:16:50.378: INFO: Deleting pod "var-expansion-396d5d5b-1485-49e0-8dfb-eefcfc716fba" in namespace "var-expansion-8443" Oct 5 18:16:50.394: INFO: Wait up to 5m0s for pod "var-expansion-396d5d5b-1485-49e0-8dfb-eefcfc716fba" to be fully deleted [AfterEach] [k8s.io] Variable Expansion 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:16:54.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8443" for this suite. • [SLOW TEST:124.193 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":264,"skipped":4385,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:16:54.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n 
"$$(getent hosts dns-querier-2.dns-test-service-2.dns-794.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-794.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-794.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-794.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-794.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-794.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 18:17:00.667: INFO: DNS probes using dns-794/dns-test-6a040ede-fdb7-40d0-81ea-2d1be0a77121 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:00.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-794" for this suite. 
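The wheezy/jessie probe scripts above build the pod's A record name by joining the pod IP's octets with dashes and appending the namespace's pod subdomain. That transformation can be replayed in isolation (the sample IP below is made up; the real script uses `hostname -i`):

```shell
# Rebuild the pod A record the probe queries: a pod with IP 10.244.1.5 in
# namespace dns-794 gets the record 10-244-1-5.dns-794.pod.cluster.local.
pod_ip="10.244.1.5"   # illustrative; the probe uses `hostname -i`
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-794.pod.cluster.local"}')
echo "$podARec"
# -> 10-244-1-5.dns-794.pod.cluster.local
```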
• [SLOW TEST:6.405 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":265,"skipped":4388,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:17:00.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 5 18:17:01.380: INFO: Waiting up to 5m0s for pod "pod-72e54d56-2161-4465-b3b6-a81124bf6f18" in namespace "emptydir-3151" to be "Succeeded or Failed" Oct 5 18:17:01.440: INFO: Pod "pod-72e54d56-2161-4465-b3b6-a81124bf6f18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 59.976801ms Oct 5 18:17:03.444: INFO: Pod "pod-72e54d56-2161-4465-b3b6-a81124bf6f18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064115445s Oct 5 18:17:05.448: INFO: Pod "pod-72e54d56-2161-4465-b3b6-a81124bf6f18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068033253s Oct 5 18:17:07.453: INFO: Pod "pod-72e54d56-2161-4465-b3b6-a81124bf6f18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072599413s STEP: Saw pod success Oct 5 18:17:07.453: INFO: Pod "pod-72e54d56-2161-4465-b3b6-a81124bf6f18" satisfied condition "Succeeded or Failed" Oct 5 18:17:07.455: INFO: Trying to get logs from node latest-worker pod pod-72e54d56-2161-4465-b3b6-a81124bf6f18 container test-container: STEP: delete the pod Oct 5 18:17:09.916: INFO: Waiting for pod pod-72e54d56-2161-4465-b3b6-a81124bf6f18 to disappear Oct 5 18:17:09.977: INFO: Pod pod-72e54d56-2161-4465-b3b6-a81124bf6f18 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:09.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3151" for this suite. 
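The tmpfs test above verifies the permission bits on the emptyDir mount inside the pod (the conformance expectation for this case is rwxrwxrwx, i.e. 0777). The check itself reduces to reading a directory's mode, which can be mimicked locally; the directory below is a stand-in for the in-pod mount path:

```shell
# Local sketch of the mode check the test runs inside the pod: create a
# directory standing in for the emptyDir mount and read its permission
# bits with GNU stat (path and mode here are illustrative).
mnt=$(mktemp -d)
chmod 0777 "$mnt"
stat -c '%a' "$mnt"
# -> 777
rmdir "$mnt"
```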
• [SLOW TEST:9.316 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4398,"failed":0} SSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:17:10.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9783 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[] Oct 5 18:17:10.289: INFO: Failed to get Endpoints object: endpoints 
"multi-endpoint-test" not found Oct 5 18:17:11.300: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9783 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[pod1:[100]] Oct 5 18:17:14.424: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-9783 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[pod1:[100] pod2:[101]] Oct 5 18:17:18.500: INFO: Unexpected endpoints: found map[c99640f4-a9fb-4370-afb9-e2a071646c25:[100]], expected map[pod1:[100] pod2:[101]], will retry Oct 5 18:17:19.505: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-9783 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[pod2:[101]] Oct 5 18:17:19.572: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-9783 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9783 to expose endpoints map[] Oct 5 18:17:20.641: INFO: successfully validated that service multi-endpoint-test in namespace services-9783 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:20.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9783" for this suite. 
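The "successfully validated" lines above come from comparing the expected endpoints map against what the Endpoints object reports, retrying until they agree (the transient `c99640f4-…` mismatch is the framework seeing a pod UID before its name mapping settles). The comparison is over map entries, so key order does not matter; a rough shell analogue, with a hypothetical `norm` helper:

```shell
# Order-insensitive comparison of two endpoint maps, roughly what the e2e
# framework does once pod UIDs have been mapped back to pod names.
norm() { tr ' ' '\n' | sort; }
expected="pod1:[100] pod2:[101]"
observed="pod2:[101] pod1:[100]"
if [ "$(echo "$expected" | norm)" = "$(echo "$observed" | norm)" ]; then
  echo "maps match"
fi
# -> maps match
```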
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.828 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":267,"skipped":4404,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:17:20.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Oct 5 18:17:21.023: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config api-versions' Oct 5 18:17:21.415: INFO: stderr: "" Oct 5 18:17:21.415: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:21.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6195" for this suite. 
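The api-versions test simply shells out to `kubectl api-versions` and asserts that the core group-version `v1` appears in the output. Against the stdout captured above, that check reduces to an exact-line grep; the list below is a trimmed copy of the logged output:

```shell
# Replay the v1 presence check against a trimmed copy of the logged
# `kubectl api-versions` stdout (full list elided). grep -x matches whole
# lines, so "apps/v1" does not satisfy the check -- only the bare "v1".
versions="apps/v1
batch/v1
networking.k8s.io/v1
v1"
echo "$versions" | grep -x 'v1'
# -> v1
```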
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":268,"skipped":4410,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:17:21.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-5c50024b-475f-451e-a062-b03c4507d26f STEP: Creating a pod to test consume configMaps Oct 5 18:17:21.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e" in namespace "configmap-4672" to be "Succeeded or Failed" Oct 5 18:17:21.671: INFO: Pod "pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915754ms Oct 5 18:17:23.675: INFO: Pod "pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007324054s Oct 5 18:17:25.680: INFO: Pod "pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012387708s STEP: Saw pod success Oct 5 18:17:25.680: INFO: Pod "pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e" satisfied condition "Succeeded or Failed" Oct 5 18:17:25.684: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e container configmap-volume-test: STEP: delete the pod Oct 5 18:17:25.734: INFO: Waiting for pod pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e to disappear Oct 5 18:17:25.755: INFO: Pod pod-configmaps-b4395e2b-426d-494a-a600-14faf512d31e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4672" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:17:25.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 18:17:25.798: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 18:17:25.807: INFO: Waiting for terminating namespaces to be deleted... Oct 5 18:17:25.809: INFO: Logging pods the apiserver thinks are on node latest-worker before test Oct 5 18:17:25.814: INFO: kindnet-9tmlz from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 18:17:25.814: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 18:17:25.814: INFO: kube-proxy-fk9hq from kube-system started at 2020-09-23 08:30:39 +0000 UTC (1 container statuses recorded) Oct 5 18:17:25.814: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 18:17:25.814: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Oct 5 18:17:25.819: INFO: kindnet-z6tnh from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 18:17:25.819: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 18:17:25.819: INFO: kube-proxy-whjz5 from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 18:17:25.819: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-78f7de4c-4d66-4253-93c1-103804ae47de 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-78f7de4c-4d66-4253-93c1-103804ae47de off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-78f7de4c-4d66-4253-93c1-103804ae47de [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:33.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.229 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":270,"skipped":4444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Oct 5 18:17:33.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5b968e0b-a379-4b9d-9a00-ffa2ff7c3600 STEP: Creating a pod to test consume configMaps Oct 5 18:17:34.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40" in namespace "projected-7352" to be "Succeeded or Failed" Oct 5 18:17:34.116: INFO: Pod "pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40": Phase="Pending", Reason="", readiness=false. Elapsed: 39.533567ms Oct 5 18:17:36.119: INFO: Pod "pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043245854s Oct 5 18:17:38.124: INFO: Pod "pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048397212s STEP: Saw pod success Oct 5 18:17:38.125: INFO: Pod "pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40" satisfied condition "Succeeded or Failed" Oct 5 18:17:38.127: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40 container projected-configmap-volume-test: STEP: delete the pod Oct 5 18:17:38.160: INFO: Waiting for pod pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40 to disappear Oct 5 18:17:38.170: INFO: Pod pod-projected-configmaps-1e2c814f-5505-4b72-8cab-433b866abb40 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:38.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7352" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4479,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:17:38.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9803 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9803 STEP: creating replication controller externalsvc in namespace services-9803 I1005 18:17:38.362956 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9803, replica count: 2 I1005 18:17:41.413350 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 18:17:44.413537 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 5 18:17:44.494: INFO: Creating new exec pod Oct 5 18:17:48.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-9803 execpod8rc9w -- /bin/sh -x -c nslookup clusterip-service.services-9803.svc.cluster.local' Oct 5 18:17:51.719: INFO: stderr: "I1005 18:17:51.609526 3964 log.go:181] (0xc00003a0b0) (0xc000990c80) Create stream\nI1005 18:17:51.609585 3964 log.go:181] (0xc00003a0b0) (0xc000990c80) Stream added, broadcasting: 1\nI1005 18:17:51.613318 3964 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1005 18:17:51.613387 3964 log.go:181] (0xc00003a0b0) (0xc000bc8000) Create stream\nI1005 18:17:51.613416 3964 log.go:181] (0xc00003a0b0) 
(0xc000bc8000) Stream added, broadcasting: 3\nI1005 18:17:51.614496 3964 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1005 18:17:51.614556 3964 log.go:181] (0xc00003a0b0) (0xc000c04000) Create stream\nI1005 18:17:51.614571 3964 log.go:181] (0xc00003a0b0) (0xc000c04000) Stream added, broadcasting: 5\nI1005 18:17:51.615693 3964 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1005 18:17:51.703814 3964 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 18:17:51.703843 3964 log.go:181] (0xc000c04000) (5) Data frame handling\nI1005 18:17:51.703862 3964 log.go:181] (0xc000c04000) (5) Data frame sent\n+ nslookup clusterip-service.services-9803.svc.cluster.local\nI1005 18:17:51.709694 3964 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 18:17:51.709706 3964 log.go:181] (0xc000bc8000) (3) Data frame handling\nI1005 18:17:51.709713 3964 log.go:181] (0xc000bc8000) (3) Data frame sent\nI1005 18:17:51.710660 3964 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 18:17:51.710674 3964 log.go:181] (0xc000bc8000) (3) Data frame handling\nI1005 18:17:51.710681 3964 log.go:181] (0xc000bc8000) (3) Data frame sent\nI1005 18:17:51.710945 3964 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 18:17:51.710955 3964 log.go:181] (0xc000c04000) (5) Data frame handling\nI1005 18:17:51.711168 3964 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 18:17:51.711179 3964 log.go:181] (0xc000bc8000) (3) Data frame handling\nI1005 18:17:51.713444 3964 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1005 18:17:51.713467 3964 log.go:181] (0xc000990c80) (1) Data frame handling\nI1005 18:17:51.713482 3964 log.go:181] (0xc000990c80) (1) Data frame sent\nI1005 18:17:51.713507 3964 log.go:181] (0xc00003a0b0) (0xc000990c80) Stream removed, broadcasting: 1\nI1005 18:17:51.713594 3964 log.go:181] (0xc00003a0b0) Go away received\nI1005 18:17:51.713918 3964 log.go:181] (0xc00003a0b0) (0xc000990c80) Stream removed, broadcasting: 1\nI1005 
18:17:51.713939 3964 log.go:181] (0xc00003a0b0) (0xc000bc8000) Stream removed, broadcasting: 3\nI1005 18:17:51.713952 3964 log.go:181] (0xc00003a0b0) (0xc000c04000) Stream removed, broadcasting: 5\n" Oct 5 18:17:51.719: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9803.svc.cluster.local\tcanonical name = externalsvc.services-9803.svc.cluster.local.\nName:\texternalsvc.services-9803.svc.cluster.local\nAddress: 10.105.79.165\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9803, will wait for the garbage collector to delete the pods Oct 5 18:17:51.789: INFO: Deleting ReplicationController externalsvc took: 16.557634ms Oct 5 18:17:52.190: INFO: Terminating ReplicationController externalsvc pods took: 400.167505ms Oct 5 18:17:59.982: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:17:59.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9803" for this suite. 
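The nslookup output above shows the ClusterIP service resolving as a CNAME to externalsvc after the type change. A minimal manifest reconstructing what the test converted the Service into (fields inferred from the namespace and CNAME target in the log; this is an illustrative sketch, not the test's actual object):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-9803
spec:
  # After the change, kube-dns serves this as a CNAME record,
  # matching the "canonical name = externalsvc..." line in the log.
  type: ExternalName
  externalName: externalsvc.services-9803.svc.cluster.local
```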
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.874 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":272,"skipped":4492,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:18:00.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-0ab0c091-a4fe-4e1d-8006-2697bbc8c421 STEP: Creating a pod to test 
consume secrets Oct 5 18:18:00.173: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec" in namespace "projected-2920" to be "Succeeded or Failed" Oct 5 18:18:00.206: INFO: Pod "pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec": Phase="Pending", Reason="", readiness=false. Elapsed: 32.29808ms Oct 5 18:18:02.267: INFO: Pod "pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093402525s Oct 5 18:18:04.371: INFO: Pod "pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198047656s Oct 5 18:18:06.375: INFO: Pod "pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.201301334s STEP: Saw pod success Oct 5 18:18:06.375: INFO: Pod "pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec" satisfied condition "Succeeded or Failed" Oct 5 18:18:06.378: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec container projected-secret-volume-test: STEP: delete the pod Oct 5 18:18:06.818: INFO: Waiting for pod pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec to disappear Oct 5 18:18:06.901: INFO: Pod pod-projected-secrets-66be8664-703f-4588-b054-a2c7135c3cec no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:18:06.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2920" for this suite. 
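The pod above consumes a projected secret with per-item paths and modes. A hedged sketch of such a pod spec (the secret key, path, and container image here are hypothetical placeholders; the log does not record the test's actual item mapping):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: projected-secret-volume-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # illustrative name
          items:
          - key: data-1                     # hypothetical key
            path: new-path-data-1           # hypothetical mapped path
            mode: 0400                      # the "Item Mode" the test title refers to
  restartPolicy: Never
```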
• [SLOW TEST:6.881 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4506,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:18:06.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 18:18:07.704: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Oct 5 18:18:09.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518687, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518687, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518687, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518687, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 18:18:12.750: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:18:13.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-136" for this suite. 
STEP: Destroying namespace "webhook-136-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.515 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":274,"skipped":4524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:18:13.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert 
STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 18:18:14.334: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 18:18:16.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 18:18:18.347: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518694, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 18:18:21.376: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:18:21.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4859-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:18:22.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3591" for this suite. STEP: Destroying namespace "webhook-3591-markers" for this suite. 
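The mutating webhook registered above patches incoming objects before admission. As a minimal, self-contained sketch of the AdmissionReview round-trip such a webhook performs (function and label names are illustrative, not taken from the e2e sample-webhook binary), the response carries a base64-encoded JSONPatch:

```python
# Sketch of a mutating admission webhook response: given an AdmissionReview
# request, return a response that adds a label via a base64-encoded JSONPatch.
import base64
import json

def mutate(review: dict) -> dict:
    """Build an AdmissionReview response allowing the object and adding a label."""
    patch = [{"op": "add", "path": "/metadata/labels",
              "value": {"mutated": "true"}}]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],   # must echo the request UID
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

incoming = {"request": {"uid": "abc-123", "object": {"metadata": {}}}}
resp = mutate(incoming)
```

The apiserver decodes `response.patch` and applies it to the object; the test's "configMap that should be mutated" step verifies exactly this effect.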
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.483 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":275,"skipped":4559,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:18:22.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:18:23.294: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Oct 5 18:18:23.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:23.349: INFO: Number of nodes with available pods: 0 Oct 5 18:18:23.349: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:18:24.360: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:24.364: INFO: Number of nodes with available pods: 0 Oct 5 18:18:24.364: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:18:25.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:25.363: INFO: Number of nodes with available pods: 0 Oct 5 18:18:25.363: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:18:26.663: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:26.667: INFO: Number of nodes with available pods: 0 Oct 5 18:18:26.667: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:18:27.408: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:27.411: INFO: Number of nodes with available pods: 0 Oct 5 18:18:27.411: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:18:28.356: 
INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:28.360: INFO: Number of nodes with available pods: 1 Oct 5 18:18:28.360: INFO: Node latest-worker is running more than one daemon pod Oct 5 18:18:29.359: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:29.365: INFO: Number of nodes with available pods: 2 Oct 5 18:18:29.365: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Oct 5 18:18:29.435: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:29.435: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:29.457: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:30.462: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:30.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:30.468: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:31.531: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:31.531: INFO: Wrong image for pod: daemon-set-wmrss. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:31.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:32.462: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:32.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:32.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:33.462: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:33.462: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:33.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:33.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:34.462: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:34.462: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:34.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 18:18:34.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:35.471: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:35.471: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:35.471: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:35.475: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:36.463: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:36.463: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:36.463: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:36.468: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:37.462: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:37.462: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:37.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:37.467: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:38.462: INFO: Wrong image for pod: daemon-set-9npqf. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:38.462: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:38.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:38.467: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:39.462: INFO: Wrong image for pod: daemon-set-9npqf. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:39.462: INFO: Pod daemon-set-9npqf is not available Oct 5 18:18:39.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:39.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:40.462: INFO: Pod daemon-set-whbrc is not available Oct 5 18:18:40.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:40.467: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:41.537: INFO: Pod daemon-set-whbrc is not available Oct 5 18:18:41.537: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 18:18:41.540: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:42.462: INFO: Pod daemon-set-whbrc is not available Oct 5 18:18:42.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:42.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:43.461: INFO: Pod daemon-set-whbrc is not available Oct 5 18:18:43.461: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:43.465: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:44.525: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 18:18:44.531: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:45.462: INFO: Wrong image for pod: daemon-set-wmrss. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 18:18:45.462: INFO: Pod daemon-set-wmrss is not available Oct 5 18:18:45.467: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:46.463: INFO: Pod daemon-set-tg856 is not available Oct 5 18:18:46.517: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Oct 5 18:18:46.522: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:46.525: INFO: Number of nodes with available pods: 1 Oct 5 18:18:46.525: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 18:18:47.532: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:47.536: INFO: Number of nodes with available pods: 1 Oct 5 18:18:47.536: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 18:18:48.537: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:48.540: INFO: Number of nodes with available pods: 1 Oct 5 18:18:48.540: INFO: Node latest-worker2 is running more than one daemon pod Oct 5 18:18:49.531: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 18:18:49.535: INFO: Number of nodes with available pods: 2 Oct 5 18:18:49.535: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6826, will wait for the garbage collector to delete the pods Oct 5 18:18:49.609: INFO: Deleting DaemonSet.extensions daemon-set took: 5.35887ms Oct 5 18:18:50.009: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.215017ms Oct 5 18:18:59.913: INFO: Number of nodes with available pods: 0 Oct 5 18:18:59.913: INFO: Number of running nodes: 0, number of available pods: 0 Oct 5 18:18:59.915: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6826/daemonsets","resourceVersion":"3419841"},"items":null} Oct 5 18:18:59.918: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6826/pods","resourceVersion":"3419841"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:18:59.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6826" for this suite. 
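The pod-by-pod image replacement visible in the "Wrong image for pod" loop above is driven by the DaemonSet's RollingUpdate strategy. A hedged manifest sketch of such a DaemonSet (selector labels and maxUnavailable are assumptions; only the name and the agnhost target image appear in the log):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # assumed default; replaces one pod at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        # Updating this field (httpd -> agnhost in the test) triggers the
        # rolling replacement traced in the log above.
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```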
• [SLOW TEST:37.002 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":276,"skipped":4561,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:18:59.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-9e51aad9-e567-492e-985f-28a204a0249c in namespace container-probe-4857 Oct 5 18:19:04.065: INFO: Started pod 
test-webserver-9e51aad9-e567-492e-985f-28a204a0249c in namespace container-probe-4857 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 18:19:04.069: INFO: Initial restart count of pod test-webserver-9e51aad9-e567-492e-985f-28a204a0249c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:04.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4857" for this suite. • [SLOW TEST:244.352 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4566,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:04.287: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 18:23:05.800: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 18:23:07.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518986, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518986, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518986, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518985, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 18:23:09.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518986, loc:(*time.Location)(0x7701840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518986, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518986, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737518985, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 18:23:12.846: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:23:12.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3303-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:14.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6249" for this suite. STEP: Destroying namespace "webhook-6249-markers" for this suite. 
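The registration step above ("Registering the mutating webhook for custom resource … via the AdmissionRegistration API") corresponds to creating a `MutatingWebhookConfiguration` that targets the CRD across versions. A hedged sketch, with the service name, path, and CA bundle as placeholders (only the namespace and CRD plural are taken from this run):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook          # illustrative name
webhooks:
- name: mutate-crd.webhook.example.com     # illustrative name
  clientConfig:
    service:
      namespace: webhook-6249              # namespace from this run
      name: e2e-test-webhook
      path: /mutating-custom-resource      # illustrative path
    caBundle: ""                           # CA bundle elided
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["*"]                     # matches the CRD's v1 and v2
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-3303-crds"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

The test then flips the CRD's storage version from v1 to v2 and verifies the webhook still mutates the object, which is why the rule matches all `apiVersions`.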
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.906 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":278,"skipped":4602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:14.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 5 18:23:14.244: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:23.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1206" for this suite. • [SLOW TEST:9.555 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":279,"skipped":4638,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:23.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between 
containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Oct 5 18:23:29.924: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6854 PodName:pod-sharedvolume-5434ae1d-57b7-43ba-98e5-0e6e8c8e47d7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 18:23:29.924: INFO: >>> kubeConfig: /root/.kube/config I1005 18:23:29.960943 7 log.go:181] (0xc00331e790) (0xc0023cd040) Create stream I1005 18:23:29.960977 7 log.go:181] (0xc00331e790) (0xc0023cd040) Stream added, broadcasting: 1 I1005 18:23:29.965099 7 log.go:181] (0xc00331e790) Reply frame received for 1 I1005 18:23:29.965156 7 log.go:181] (0xc00331e790) (0xc00250d9a0) Create stream I1005 18:23:29.965173 7 log.go:181] (0xc00331e790) (0xc00250d9a0) Stream added, broadcasting: 3 I1005 18:23:29.966106 7 log.go:181] (0xc00331e790) Reply frame received for 3 I1005 18:23:29.966131 7 log.go:181] (0xc00331e790) (0xc00250da40) Create stream I1005 18:23:29.966141 7 log.go:181] (0xc00331e790) (0xc00250da40) Stream added, broadcasting: 5 I1005 18:23:29.967114 7 log.go:181] (0xc00331e790) Reply frame received for 5 I1005 18:23:30.026336 7 log.go:181] (0xc00331e790) Data frame received for 5 I1005 18:23:30.026363 7 log.go:181] (0xc00250da40) (5) Data frame handling I1005 18:23:30.026381 7 log.go:181] (0xc00331e790) Data frame received for 3 I1005 18:23:30.026385 7 log.go:181] (0xc00250d9a0) (3) Data frame handling I1005 18:23:30.026392 7 log.go:181] (0xc00250d9a0) (3) Data frame sent I1005 18:23:30.026406 7 log.go:181] (0xc00331e790) Data frame received for 3 I1005 18:23:30.026413 7 log.go:181] (0xc00250d9a0) (3) Data frame handling I1005 18:23:30.027677 7 log.go:181] 
(0xc00331e790) Data frame received for 1 I1005 18:23:30.027712 7 log.go:181] (0xc0023cd040) (1) Data frame handling I1005 18:23:30.027727 7 log.go:181] (0xc0023cd040) (1) Data frame sent I1005 18:23:30.027810 7 log.go:181] (0xc00331e790) (0xc0023cd040) Stream removed, broadcasting: 1 I1005 18:23:30.027843 7 log.go:181] (0xc00331e790) Go away received I1005 18:23:30.027988 7 log.go:181] (0xc00331e790) (0xc0023cd040) Stream removed, broadcasting: 1 I1005 18:23:30.028016 7 log.go:181] (0xc00331e790) (0xc00250d9a0) Stream removed, broadcasting: 3 I1005 18:23:30.028035 7 log.go:181] (0xc00331e790) (0xc00250da40) Stream removed, broadcasting: 5 Oct 5 18:23:30.028: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:30.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6854" for this suite. 
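The exec above (`cat /usr/share/volumeshare/shareddata.txt`) reads, from one container, a file written by another container through a shared `emptyDir`. A minimal sketch of that pattern, with illustrative names and commands (only the mount path comes from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume            # illustrative
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                    # node-local scratch space, visible to every container in the pod
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

Because both containers mount the same volume, `cat /usr/share/volumeshare/shareddata.txt` in the reader sees the writer's output, which is the property the test verifies.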
• [SLOW TEST:6.286 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":280,"skipped":4645,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:30.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-17f34ee8-d472-4eba-8696-cad94a262be5 STEP: Creating a pod to test consume configMaps Oct 5 18:23:30.179: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6" in namespace "projected-4886" to be "Succeeded or Failed" Oct 5 18:23:30.200: INFO: Pod "pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6": 
Phase="Pending", Reason="", readiness=false. Elapsed: 20.844027ms Oct 5 18:23:32.204: INFO: Pod "pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024932751s Oct 5 18:23:34.207: INFO: Pod "pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028060876s STEP: Saw pod success Oct 5 18:23:34.207: INFO: Pod "pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6" satisfied condition "Succeeded or Failed" Oct 5 18:23:34.210: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6 container projected-configmap-volume-test: STEP: delete the pod Oct 5 18:23:34.245: INFO: Waiting for pod pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6 to disappear Oct 5 18:23:34.250: INFO: Pod pod-projected-configmaps-78167655-f5b6-4441-8e7c-ccdab261d6a6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:34.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4886" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4651,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:34.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 5 18:23:34.321: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:40.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8136" for this suite. 
• [SLOW TEST:6.586 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":282,"skipped":4667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:40.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 18:23:40.903: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 18:23:40.909: INFO: Waiting for terminating namespaces to be deleted... 
Oct 5 18:23:40.911: INFO: Logging pods the apiserver thinks is on node latest-worker before test Oct 5 18:23:40.915: INFO: kindnet-9tmlz from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 18:23:40.915: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 18:23:40.915: INFO: kube-proxy-fk9hq from kube-system started at 2020-09-23 08:30:39 +0000 UTC (1 container statuses recorded) Oct 5 18:23:40.915: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 18:23:40.915: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Oct 5 18:23:40.919: INFO: pod-init-46733e55-c5ca-4608-93b4-becb004cb00e from init-container-8136 started at 2020-10-05 18:23:34 +0000 UTC (1 container statuses recorded) Oct 5 18:23:40.919: INFO: Container run1 ready: false, restart count 0 Oct 5 18:23:40.919: INFO: kindnet-z6tnh from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 18:23:40.919: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 18:23:40.919: INFO: kube-proxy-whjz5 from kube-system started at 2020-09-23 08:30:40 +0000 UTC (1 container statuses recorded) Oct 5 18:23:40.919: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Oct 5 18:23:41.240: INFO: Pod kindnet-9tmlz requesting resource cpu=100m on Node latest-worker Oct 5 18:23:41.240: INFO: Pod kindnet-z6tnh requesting resource cpu=100m on Node latest-worker2 Oct 5 18:23:41.240: INFO: Pod kube-proxy-fk9hq requesting resource cpu=0m on Node latest-worker Oct 5 18:23:41.240: INFO: Pod kube-proxy-whjz5 requesting resource cpu=0m on Node latest-worker2 
STEP: Starting Pods to consume most of the cluster CPU. Oct 5 18:23:41.240: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Oct 5 18:23:41.248: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259.163b2bc6d88665cb], Reason = [Started], Message = [Started container filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9.163b2bc691313542], Reason = [Started], Message = [Started container filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9.163b2bc5dc06125b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9907/filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9.163b2bc67d8aa46e], Reason = [Created], Message = [Created container filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9] STEP: Considering event: Type = [Normal], Name = [filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259.163b2bc6c92176b2], Reason = [Created], Message = [Created container filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb7e4be1-18b1-4602-8bff-33a286c09bc9.163b2bc627cce39b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259.163b2bc671fe780e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259.163b2bc5de036829], Reason = [Scheduled], Message = 
[Successfully assigned sched-pred-9907/filler-pod-9cafb27f-35b5-4b12-be74-5bf208d47259 to latest-worker2] STEP: Considering event: Type = [Warning], Name = [additional-pod.163b2bc749ac9890], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.163b2bc74bd6d8f2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:48.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9907" for this suite. 
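The predicate test above works by saturating each node's allocatable CPU with a "filler" pod, then showing that one more pod cannot schedule (the `Insufficient cpu` events). A sketch of a filler pod, using the CPU figure and image reported in this log (the name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod                  # illustrative
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.2     # image shown for the filler pods in this run
    resources:
      requests:
        cpu: 11130m                 # sized to consume the node's remaining allocatable CPU
      limits:
        cpu: 11130m
```

Once both workers carry such a request, any additional pod that requests CPU is rejected by the scheduler with `FailedScheduling`, exactly as the warning events above record.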
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.619 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":283,"skipped":4693,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:48.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 18:23:48.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513" in namespace "downward-api-9846" to be "Succeeded or Failed" Oct 5 18:23:48.556: INFO: Pod "downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513": Phase="Pending", Reason="", readiness=false. Elapsed: 7.754505ms Oct 5 18:23:50.565: INFO: Pod "downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016521194s Oct 5 18:23:52.571: INFO: Pod "downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022273782s STEP: Saw pod success Oct 5 18:23:52.571: INFO: Pod "downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513" satisfied condition "Succeeded or Failed" Oct 5 18:23:52.574: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513 container client-container: STEP: delete the pod Oct 5 18:23:53.459: INFO: Waiting for pod downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513 to disappear Oct 5 18:23:53.482: INFO: Pod downwardapi-volume-807b9bfa-e2a7-4ae3-9f41-b0e23d049513 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:53.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9846" for this suite. 
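The downward-API volume test above exposes a container's CPU limit as a file inside the pod. A sketch of that wiring, with illustrative name, image, limit, and file path (only the container name `client-container` matches this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                   # arbitrary example limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu      # surfaced to the container as file contents
          divisor: 1m
```

The companion test that follows does the same with `resource: requests.cpu`; in both cases the pod succeeds once the file contents match the declared value.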
• [SLOW TEST:5.028 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4703,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:53.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 18:23:53.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1" in namespace "downward-api-4396" to be 
"Succeeded or Failed" Oct 5 18:23:53.631: INFO: Pod "downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.034126ms Oct 5 18:23:55.635: INFO: Pod "downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02475504s Oct 5 18:23:57.640: INFO: Pod "downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.02988637s Oct 5 18:23:59.645: INFO: Pod "downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03509341s STEP: Saw pod success Oct 5 18:23:59.646: INFO: Pod "downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1" satisfied condition "Succeeded or Failed" Oct 5 18:23:59.649: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1 container client-container: STEP: delete the pod Oct 5 18:23:59.696: INFO: Waiting for pod downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1 to disappear Oct 5 18:23:59.742: INFO: Pod downwardapi-volume-0a1299ef-b041-4725-a18c-1b42a12537f1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:23:59.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4396" for this suite. 
• [SLOW TEST:6.259 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4707,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:23:59.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:24:16.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5266" for this suite. • [SLOW TEST:17.152 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":303,"completed":286,"skipped":4710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:24:16.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:24:21.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8520" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":287,"skipped":4741,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:24:21.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3905.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3905.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3905.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 18:24:27.226: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.230: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.233: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.237: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.247: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.250: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod 
dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.254: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.257: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:27.264: INFO: Lookups using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local] Oct 5 18:24:32.269: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.273: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.276: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod 
dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.279: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.287: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.290: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.293: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.297: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:32.302: INFO: Lookups using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local] Oct 5 18:24:37.269: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.273: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.277: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.280: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.289: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.293: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.296: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod 
dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.299: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:37.306: INFO: Lookups using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local] Oct 5 18:24:42.570: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.574: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.577: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.580: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod 
dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.587: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.623: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.631: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.633: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:42.644: INFO: Lookups using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local] Oct 5 18:24:47.269: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local 
from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.276: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.278: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.280: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.286: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.288: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.290: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.292: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the 
server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:47.296: INFO: Lookups using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local] Oct 5 18:24:52.269: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.273: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.299: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.302: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.311: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod 
dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.315: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.318: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.321: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local from pod dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd: the server could not find the requested resource (get pods dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd) Oct 5 18:24:52.326: INFO: Lookups using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3905.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3905.svc.cluster.local jessie_udp@dns-test-service-2.dns-3905.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3905.svc.cluster.local] Oct 5 18:24:57.343: INFO: DNS probes using dns-3905/dns-test-517e82d4-3382-46a9-bf84-bb126d87ddbd succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:24:57.978: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3905" for this suite. • [SLOW TEST:36.899 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":288,"skipped":4749,"failed":0} [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:24:58.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-f474643d-1339-45ea-bffe-63fc9be03190 STEP: Creating secret with name s-test-opt-upd-e4fc9251-a88f-46c0-a2c7-3320a9da2615 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f474643d-1339-45ea-bffe-63fc9be03190 STEP: Updating secret s-test-opt-upd-e4fc9251-a88f-46c0-a2c7-3320a9da2615 STEP: Creating secret with name s-test-opt-create-56436538-ea1a-4bd7-b0fb-f3d7013086fb STEP: 
waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:08.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9892" for this suite. • [SLOW TEST:10.285 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":289,"skipped":4749,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:08.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:24.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9016" for this suite. • [SLOW TEST:16.301 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":290,"skipped":4751,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:24.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 5 18:25:24.690: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 5 18:25:29.725: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:29.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9539" for this suite. 
• [SLOW TEST:5.293 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":291,"skipped":4766,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:29.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-1278 STEP: creating replication controller nodeport-test in namespace services-1278 I1005 18:25:30.146712 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1278, replica count: 2 I1005 18:25:33.197240 7 
runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 18:25:36.197501 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 18:25:36.197: INFO: Creating new exec pod Oct 5 18:25:41.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1278 execpodgs4t4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Oct 5 18:25:41.486: INFO: stderr: "I1005 18:25:41.388286 3982 log.go:181] (0xc000f15080) (0xc000f82640) Create stream\nI1005 18:25:41.388360 3982 log.go:181] (0xc000f15080) (0xc000f82640) Stream added, broadcasting: 1\nI1005 18:25:41.392792 3982 log.go:181] (0xc000f15080) Reply frame received for 1\nI1005 18:25:41.392984 3982 log.go:181] (0xc000f15080) (0xc000f0c1e0) Create stream\nI1005 18:25:41.393025 3982 log.go:181] (0xc000f15080) (0xc000f0c1e0) Stream added, broadcasting: 3\nI1005 18:25:41.394040 3982 log.go:181] (0xc000f15080) Reply frame received for 3\nI1005 18:25:41.394084 3982 log.go:181] (0xc000f15080) (0xc000f82000) Create stream\nI1005 18:25:41.394095 3982 log.go:181] (0xc000f15080) (0xc000f82000) Stream added, broadcasting: 5\nI1005 18:25:41.394888 3982 log.go:181] (0xc000f15080) Reply frame received for 5\nI1005 18:25:41.478369 3982 log.go:181] (0xc000f15080) Data frame received for 5\nI1005 18:25:41.478396 3982 log.go:181] (0xc000f82000) (5) Data frame handling\nI1005 18:25:41.478408 3982 log.go:181] (0xc000f82000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI1005 18:25:41.478674 3982 log.go:181] (0xc000f15080) Data frame received for 5\nI1005 18:25:41.478684 3982 log.go:181] (0xc000f82000) (5) Data frame handling\nI1005 18:25:41.478694 3982 log.go:181] (0xc000f82000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1005 
18:25:41.479060 3982 log.go:181] (0xc000f15080) Data frame received for 3\nI1005 18:25:41.479077 3982 log.go:181] (0xc000f0c1e0) (3) Data frame handling\nI1005 18:25:41.479218 3982 log.go:181] (0xc000f15080) Data frame received for 5\nI1005 18:25:41.479248 3982 log.go:181] (0xc000f82000) (5) Data frame handling\nI1005 18:25:41.481315 3982 log.go:181] (0xc000f15080) Data frame received for 1\nI1005 18:25:41.481354 3982 log.go:181] (0xc000f82640) (1) Data frame handling\nI1005 18:25:41.481381 3982 log.go:181] (0xc000f82640) (1) Data frame sent\nI1005 18:25:41.481401 3982 log.go:181] (0xc000f15080) (0xc000f82640) Stream removed, broadcasting: 1\nI1005 18:25:41.481432 3982 log.go:181] (0xc000f15080) Go away received\nI1005 18:25:41.481818 3982 log.go:181] (0xc000f15080) (0xc000f82640) Stream removed, broadcasting: 1\nI1005 18:25:41.481840 3982 log.go:181] (0xc000f15080) (0xc000f0c1e0) Stream removed, broadcasting: 3\nI1005 18:25:41.481849 3982 log.go:181] (0xc000f15080) (0xc000f82000) Stream removed, broadcasting: 5\n" Oct 5 18:25:41.486: INFO: stdout: "" Oct 5 18:25:41.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1278 execpodgs4t4 -- /bin/sh -x -c nc -zv -t -w 2 10.98.157.137 80' Oct 5 18:25:41.692: INFO: stderr: "I1005 18:25:41.612672 4000 log.go:181] (0xc00023f600) (0xc0005d28c0) Create stream\nI1005 18:25:41.612718 4000 log.go:181] (0xc00023f600) (0xc0005d28c0) Stream added, broadcasting: 1\nI1005 18:25:41.620950 4000 log.go:181] (0xc00023f600) Reply frame received for 1\nI1005 18:25:41.620991 4000 log.go:181] (0xc00023f600) (0xc0005d2000) Create stream\nI1005 18:25:41.621001 4000 log.go:181] (0xc00023f600) (0xc0005d2000) Stream added, broadcasting: 3\nI1005 18:25:41.622861 4000 log.go:181] (0xc00023f600) Reply frame received for 3\nI1005 18:25:41.622883 4000 log.go:181] (0xc00023f600) (0xc0005ba140) Create stream\nI1005 18:25:41.622891 4000 log.go:181] (0xc00023f600) 
(0xc0005ba140) Stream added, broadcasting: 5\nI1005 18:25:41.623757 4000 log.go:181] (0xc00023f600) Reply frame received for 5\nI1005 18:25:41.685928 4000 log.go:181] (0xc00023f600) Data frame received for 5\nI1005 18:25:41.685964 4000 log.go:181] (0xc0005ba140) (5) Data frame handling\nI1005 18:25:41.685973 4000 log.go:181] (0xc0005ba140) (5) Data frame sent\nI1005 18:25:41.685981 4000 log.go:181] (0xc00023f600) Data frame received for 5\nI1005 18:25:41.685987 4000 log.go:181] (0xc0005ba140) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.157.137 80\nConnection to 10.98.157.137 80 port [tcp/http] succeeded!\nI1005 18:25:41.686014 4000 log.go:181] (0xc00023f600) Data frame received for 3\nI1005 18:25:41.686024 4000 log.go:181] (0xc0005d2000) (3) Data frame handling\nI1005 18:25:41.686834 4000 log.go:181] (0xc00023f600) Data frame received for 1\nI1005 18:25:41.686862 4000 log.go:181] (0xc0005d28c0) (1) Data frame handling\nI1005 18:25:41.686884 4000 log.go:181] (0xc0005d28c0) (1) Data frame sent\nI1005 18:25:41.686902 4000 log.go:181] (0xc00023f600) (0xc0005d28c0) Stream removed, broadcasting: 1\nI1005 18:25:41.686947 4000 log.go:181] (0xc00023f600) Go away received\nI1005 18:25:41.687219 4000 log.go:181] (0xc00023f600) (0xc0005d28c0) Stream removed, broadcasting: 1\nI1005 18:25:41.687231 4000 log.go:181] (0xc00023f600) (0xc0005d2000) Stream removed, broadcasting: 3\nI1005 18:25:41.687237 4000 log.go:181] (0xc00023f600) (0xc0005ba140) Stream removed, broadcasting: 5\n" Oct 5 18:25:41.692: INFO: stdout: "" Oct 5 18:25:41.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1278 execpodgs4t4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30366' Oct 5 18:25:41.919: INFO: stderr: "I1005 18:25:41.840822 4018 log.go:181] (0xc0008c4fd0) (0xc0001f9220) Create stream\nI1005 18:25:41.840950 4018 log.go:181] (0xc0008c4fd0) (0xc0001f9220) Stream added, broadcasting: 1\nI1005 18:25:41.846337 4018 
log.go:181] (0xc0008c4fd0) Reply frame received for 1\nI1005 18:25:41.846375 4018 log.go:181] (0xc0008c4fd0) (0xc0003cbea0) Create stream\nI1005 18:25:41.846388 4018 log.go:181] (0xc0008c4fd0) (0xc0003cbea0) Stream added, broadcasting: 3\nI1005 18:25:41.847200 4018 log.go:181] (0xc0008c4fd0) Reply frame received for 3\nI1005 18:25:41.847242 4018 log.go:181] (0xc0008c4fd0) (0xc0001f81e0) Create stream\nI1005 18:25:41.847258 4018 log.go:181] (0xc0008c4fd0) (0xc0001f81e0) Stream added, broadcasting: 5\nI1005 18:25:41.848005 4018 log.go:181] (0xc0008c4fd0) Reply frame received for 5\nI1005 18:25:41.911333 4018 log.go:181] (0xc0008c4fd0) Data frame received for 5\nI1005 18:25:41.911358 4018 log.go:181] (0xc0001f81e0) (5) Data frame handling\nI1005 18:25:41.911370 4018 log.go:181] (0xc0001f81e0) (5) Data frame sent\nI1005 18:25:41.911379 4018 log.go:181] (0xc0008c4fd0) Data frame received for 5\nI1005 18:25:41.911386 4018 log.go:181] (0xc0001f81e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30366\nConnection to 172.18.0.15 30366 port [tcp/30366] succeeded!\nI1005 18:25:41.911406 4018 log.go:181] (0xc0008c4fd0) Data frame received for 3\nI1005 18:25:41.911411 4018 log.go:181] (0xc0003cbea0) (3) Data frame handling\nI1005 18:25:41.913482 4018 log.go:181] (0xc0008c4fd0) Data frame received for 1\nI1005 18:25:41.913497 4018 log.go:181] (0xc0001f9220) (1) Data frame handling\nI1005 18:25:41.913503 4018 log.go:181] (0xc0001f9220) (1) Data frame sent\nI1005 18:25:41.913511 4018 log.go:181] (0xc0008c4fd0) (0xc0001f9220) Stream removed, broadcasting: 1\nI1005 18:25:41.913799 4018 log.go:181] (0xc0008c4fd0) (0xc0001f9220) Stream removed, broadcasting: 1\nI1005 18:25:41.913819 4018 log.go:181] (0xc0008c4fd0) (0xc0003cbea0) Stream removed, broadcasting: 3\nI1005 18:25:41.913930 4018 log.go:181] (0xc0008c4fd0) (0xc0001f81e0) Stream removed, broadcasting: 5\nI1005 18:25:41.914010 4018 log.go:181] (0xc0008c4fd0) Go away received\n" Oct 5 18:25:41.919: INFO: stdout: "" Oct 5 
18:25:41.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35633 --kubeconfig=/root/.kube/config exec --namespace=services-1278 execpodgs4t4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30366' Oct 5 18:25:42.157: INFO: stderr: "I1005 18:25:42.065153 4036 log.go:181] (0xc00003a0b0) (0xc000e10140) Create stream\nI1005 18:25:42.065211 4036 log.go:181] (0xc00003a0b0) (0xc000e10140) Stream added, broadcasting: 1\nI1005 18:25:42.066869 4036 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1005 18:25:42.066918 4036 log.go:181] (0xc00003a0b0) (0xc000b30000) Create stream\nI1005 18:25:42.066926 4036 log.go:181] (0xc00003a0b0) (0xc000b30000) Stream added, broadcasting: 3\nI1005 18:25:42.067646 4036 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1005 18:25:42.067675 4036 log.go:181] (0xc00003a0b0) (0xc000a51ea0) Create stream\nI1005 18:25:42.067682 4036 log.go:181] (0xc00003a0b0) (0xc000a51ea0) Stream added, broadcasting: 5\nI1005 18:25:42.068253 4036 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1005 18:25:42.149626 4036 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 18:25:42.149688 4036 log.go:181] (0xc000a51ea0) (5) Data frame handling\nI1005 18:25:42.149714 4036 log.go:181] (0xc000a51ea0) (5) Data frame sent\nI1005 18:25:42.149733 4036 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1005 18:25:42.149749 4036 log.go:181] (0xc000a51ea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30366\nConnection to 172.18.0.16 30366 port [tcp/30366] succeeded!\nI1005 18:25:42.149797 4036 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1005 18:25:42.149843 4036 log.go:181] (0xc000b30000) (3) Data frame handling\nI1005 18:25:42.151589 4036 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1005 18:25:42.151611 4036 log.go:181] (0xc000e10140) (1) Data frame handling\nI1005 18:25:42.151626 4036 log.go:181] (0xc000e10140) (1) Data frame sent\nI1005 18:25:42.151644 4036 log.go:181] (0xc00003a0b0) 
(0xc000e10140) Stream removed, broadcasting: 1\nI1005 18:25:42.151668 4036 log.go:181] (0xc00003a0b0) Go away received\nI1005 18:25:42.152155 4036 log.go:181] (0xc00003a0b0) (0xc000e10140) Stream removed, broadcasting: 1\nI1005 18:25:42.152184 4036 log.go:181] (0xc00003a0b0) (0xc000b30000) Stream removed, broadcasting: 3\nI1005 18:25:42.152197 4036 log.go:181] (0xc00003a0b0) (0xc000a51ea0) Stream removed, broadcasting: 5\n" Oct 5 18:25:42.158: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:42.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1278" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.279 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":292,"skipped":4766,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:42.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:42.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9216" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":293,"skipped":4803,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:42.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:25:42.475: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2423e90f-f256-466f-90d1-5bb127166457" in namespace "security-context-test-8485" to be "Succeeded or Failed" Oct 5 18:25:42.498: INFO: Pod "busybox-privileged-false-2423e90f-f256-466f-90d1-5bb127166457": Phase="Pending", Reason="", readiness=false. Elapsed: 22.287511ms Oct 5 18:25:44.502: INFO: Pod "busybox-privileged-false-2423e90f-f256-466f-90d1-5bb127166457": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026706227s Oct 5 18:25:46.506: INFO: Pod "busybox-privileged-false-2423e90f-f256-466f-90d1-5bb127166457": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030935226s Oct 5 18:25:46.506: INFO: Pod "busybox-privileged-false-2423e90f-f256-466f-90d1-5bb127166457" satisfied condition "Succeeded or Failed" Oct 5 18:25:46.512: INFO: Got logs for pod "busybox-privileged-false-2423e90f-f256-466f-90d1-5bb127166457": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:46.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8485" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4823,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:46.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6d6c3459-b20c-4907-998d-b694af8823e6 STEP: Creating a pod to test consume secrets Oct 5 18:25:46.618: INFO: Waiting up to 5m0s for pod "pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0" in namespace "secrets-7841" to be "Succeeded or Failed" Oct 5 18:25:46.635: INFO: Pod "pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.750744ms Oct 5 18:25:48.639: INFO: Pod "pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020692883s Oct 5 18:25:50.924: INFO: Pod "pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305719506s STEP: Saw pod success Oct 5 18:25:50.924: INFO: Pod "pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0" satisfied condition "Succeeded or Failed" Oct 5 18:25:50.929: INFO: Trying to get logs from node latest-worker pod pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0 container secret-volume-test: STEP: delete the pod Oct 5 18:25:50.983: INFO: Waiting for pod pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0 to disappear Oct 5 18:25:51.139: INFO: Pod pod-secrets-e291cad8-0d33-4284-aa83-9dc9eccb3da0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:51.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7841" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4825,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:51.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:51.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1049" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":296,"skipped":4831,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:51.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Oct 5 18:25:51.434: INFO: Created pod &Pod{ObjectMeta:{dns-37 dns-37 /api/v1/namespaces/dns-37/pods/dns-37 786ee8ad-eefb-4cc7-88e7-7226c3861e03 3421747 0 2020-10-05 18:25:51 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-10-05 18:25:51 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j78xc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j78xc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j78xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:25:51.450: INFO: The status of Pod dns-37 is Pending, waiting for it to be Running (with Ready = true) Oct 5 18:25:53.454: INFO: The status of Pod dns-37 is Pending, waiting for it to be Running (with Ready = true) Oct 5 18:25:55.453: INFO: The status of Pod dns-37 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Oct 5 18:25:55.453: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-37 PodName:dns-37 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 18:25:55.453: INFO: >>> kubeConfig: /root/.kube/config I1005 18:25:55.490782 7 log.go:181] (0xc0029d0840) (0xc000e85540) Create stream I1005 18:25:55.490810 7 log.go:181] (0xc0029d0840) (0xc000e85540) Stream added, broadcasting: 1 I1005 18:25:55.492941 7 log.go:181] (0xc0029d0840) Reply frame received for 1 I1005 18:25:55.493023 7 log.go:181] (0xc0029d0840) (0xc000e855e0) Create stream I1005 18:25:55.493053 7 log.go:181] (0xc0029d0840) (0xc000e855e0) Stream added, broadcasting: 3 I1005 18:25:55.493949 7 log.go:181] (0xc0029d0840) Reply frame received for 3 I1005 18:25:55.493983 7 log.go:181] (0xc0029d0840) (0xc001459860) Create stream I1005 18:25:55.494002 7 log.go:181] (0xc0029d0840) (0xc001459860) Stream added, broadcasting: 5 I1005 18:25:55.494941 7 log.go:181] (0xc0029d0840) Reply frame received for 5 I1005 18:25:55.583710 7 log.go:181] (0xc0029d0840) Data frame received for 3 I1005 18:25:55.583731 7 log.go:181] (0xc000e855e0) (3) Data frame handling I1005 18:25:55.583739 7 log.go:181] (0xc000e855e0) (3) Data frame sent I1005 18:25:55.584949 7 log.go:181] (0xc0029d0840) Data frame received for 3 I1005 18:25:55.584964 7 log.go:181] (0xc000e855e0) (3) Data frame handling I1005 18:25:55.585265 7 log.go:181] (0xc0029d0840) Data frame received for 5 I1005 18:25:55.585290 7 log.go:181] (0xc001459860) (5) Data frame handling I1005 18:25:55.586726 7 log.go:181] (0xc0029d0840) Data frame received for 1 I1005 18:25:55.586750 7 log.go:181] (0xc000e85540) (1) Data frame handling I1005 18:25:55.586761 7 log.go:181] (0xc000e85540) (1) Data frame sent I1005 18:25:55.586799 7 log.go:181] (0xc0029d0840) (0xc000e85540) Stream removed, broadcasting: 1 I1005 18:25:55.586864 7 log.go:181] (0xc0029d0840) Go away received I1005 18:25:55.586923 7 log.go:181] (0xc0029d0840) 
(0xc000e85540) Stream removed, broadcasting: 1 I1005 18:25:55.586947 7 log.go:181] (0xc0029d0840) (0xc000e855e0) Stream removed, broadcasting: 3 I1005 18:25:55.586972 7 log.go:181] (0xc0029d0840) (0xc001459860) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Oct 5 18:25:55.587: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-37 PodName:dns-37 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 18:25:55.587: INFO: >>> kubeConfig: /root/.kube/config I1005 18:25:55.614779 7 log.go:181] (0xc0029d0f20) (0xc000e85860) Create stream I1005 18:25:55.614802 7 log.go:181] (0xc0029d0f20) (0xc000e85860) Stream added, broadcasting: 1 I1005 18:25:55.616733 7 log.go:181] (0xc0029d0f20) Reply frame received for 1 I1005 18:25:55.616770 7 log.go:181] (0xc0029d0f20) (0xc000e85900) Create stream I1005 18:25:55.616788 7 log.go:181] (0xc0029d0f20) (0xc000e85900) Stream added, broadcasting: 3 I1005 18:25:55.617717 7 log.go:181] (0xc0029d0f20) Reply frame received for 3 I1005 18:25:55.617754 7 log.go:181] (0xc0029d0f20) (0xc0027ad360) Create stream I1005 18:25:55.617767 7 log.go:181] (0xc0029d0f20) (0xc0027ad360) Stream added, broadcasting: 5 I1005 18:25:55.618683 7 log.go:181] (0xc0029d0f20) Reply frame received for 5 I1005 18:25:55.687911 7 log.go:181] (0xc0029d0f20) Data frame received for 3 I1005 18:25:55.687932 7 log.go:181] (0xc000e85900) (3) Data frame handling I1005 18:25:55.687943 7 log.go:181] (0xc000e85900) (3) Data frame sent I1005 18:25:55.688825 7 log.go:181] (0xc0029d0f20) Data frame received for 3 I1005 18:25:55.689015 7 log.go:181] (0xc000e85900) (3) Data frame handling I1005 18:25:55.689059 7 log.go:181] (0xc0029d0f20) Data frame received for 5 I1005 18:25:55.689120 7 log.go:181] (0xc0027ad360) (5) Data frame handling I1005 18:25:55.690700 7 log.go:181] (0xc0029d0f20) Data frame received for 1 I1005 18:25:55.690725 7 log.go:181] (0xc000e85860) (1) Data 
frame handling I1005 18:25:55.690742 7 log.go:181] (0xc000e85860) (1) Data frame sent I1005 18:25:55.690758 7 log.go:181] (0xc0029d0f20) (0xc000e85860) Stream removed, broadcasting: 1 I1005 18:25:55.690803 7 log.go:181] (0xc0029d0f20) Go away received I1005 18:25:55.690839 7 log.go:181] (0xc0029d0f20) (0xc000e85860) Stream removed, broadcasting: 1 I1005 18:25:55.690871 7 log.go:181] (0xc0029d0f20) (0xc000e85900) Stream removed, broadcasting: 3 I1005 18:25:55.690883 7 log.go:181] (0xc0029d0f20) (0xc0027ad360) Stream removed, broadcasting: 5 Oct 5 18:25:55.690: INFO: Deleting pod dns-37... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:25:55.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-37" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":297,"skipped":4850,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:25:55.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 5 18:25:55.826: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:26:11.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2153" for this suite. • [SLOW TEST:15.882 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":298,"skipped":4858,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:26:11.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:26:16.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3902" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4868,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:26:16.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 18:26:16.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed" in namespace "downward-api-1424" to be "Succeeded or Failed" Oct 5 18:26:16.204: INFO: Pod "downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed": Phase="Pending", Reason="", readiness=false. Elapsed: 27.547594ms Oct 5 18:26:18.209: INFO: Pod "downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031915684s Oct 5 18:26:20.213: INFO: Pod "downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036598828s STEP: Saw pod success Oct 5 18:26:20.213: INFO: Pod "downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed" satisfied condition "Succeeded or Failed" Oct 5 18:26:20.217: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed container client-container: STEP: delete the pod Oct 5 18:26:20.238: INFO: Waiting for pod downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed to disappear Oct 5 18:26:20.253: INFO: Pod downwardapi-volume-50082030-c4df-4806-bb1a-09b2a66c99ed no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:26:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1424" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4876,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:26:20.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8058.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8058.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 18:26:28.393: INFO: DNS probes using dns-test-9e9630a6-c4d7-4142-ba00-f9143871c0de succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8058.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8058.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 18:26:36.652: INFO: File wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:36.655: INFO: File jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Oct 5 18:26:36.655: INFO: Lookups using dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b failed for: [wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local] Oct 5 18:26:41.661: INFO: File wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:41.663: INFO: File jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:41.663: INFO: Lookups using dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b failed for: [wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local] Oct 5 18:26:46.659: INFO: File wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:46.663: INFO: File jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:46.663: INFO: Lookups using dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b failed for: [wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local] Oct 5 18:26:51.659: INFO: File wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:51.663: INFO: File jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' 
Oct 5 18:26:51.663: INFO: Lookups using dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b failed for: [wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local] Oct 5 18:26:56.660: INFO: File wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:56.664: INFO: File jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local from pod dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 5 18:26:56.664: INFO: Lookups using dns-8058/dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b failed for: [wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local] Oct 5 18:27:01.661: INFO: DNS probes using dns-test-b5f34ab5-d24d-427b-a900-b361fce76d5b succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8058.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8058.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8058.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8058.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 18:27:10.276: INFO: DNS probes using dns-test-d01a42e9-50c5-4b03-b467-4c857f095d38 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:27:10.380: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8058" for this suite. • [SLOW TEST:50.130 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":301,"skipped":4876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 18:27:10.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 18:27:10.773: INFO: Creating deployment "webserver-deployment" Oct 5 18:27:10.840: INFO: Waiting for observed generation 1 Oct 5 18:27:12.996: INFO: Waiting for all required pods to come up Oct 5 18:27:13.001: 
INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 5 18:27:23.039: INFO: Waiting for deployment "webserver-deployment" to complete Oct 5 18:27:23.046: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 5 18:27:23.053: INFO: Updating deployment webserver-deployment Oct 5 18:27:23.053: INFO: Waiting for observed generation 2 Oct 5 18:27:25.069: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 5 18:27:25.071: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 5 18:27:25.074: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 5 18:27:25.081: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 5 18:27:25.081: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 5 18:27:25.084: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 5 18:27:25.088: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 5 18:27:25.088: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 5 18:27:25.095: INFO: Updating deployment webserver-deployment Oct 5 18:27:25.095: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 5 18:27:25.202: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 5 18:27:25.232: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 18:27:25.827: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6526 
/apis/apps/v1/namespaces/deployment-6526/deployments/webserver-deployment 1389e9cd-a74e-4254-bf25-538850cfaae1 3422440 3 2020-10-05 18:27:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0063f43c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-10-05 18:27:23 +0000 UTC,LastTransitionTime:2020-10-05 18:27:10 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-05 18:27:25 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 5 18:27:26.469: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6526 /apis/apps/v1/namespaces/deployment-6526/replicasets/webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 3422419 3 2020-10-05 18:27:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 
1389e9cd-a74e-4254-bf25-538850cfaae1 0xc0063f4867 0xc0063f4868}] [] [{kube-controller-manager Update apps/v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1389e9cd-a74e-4254-bf25-538850cfaae1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0063f48e8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 18:27:26.469: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 5 18:27:26.470: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6526 /apis/apps/v1/namespaces/deployment-6526/replicasets/webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 3422479 3 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1389e9cd-a74e-4254-bf25-538850cfaae1 0xc0063f4947 0xc0063f4948}] [] [{kube-controller-manager Update apps/v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1389e9cd-a74e-4254-bf25-538850cfaae1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0063f49b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 5 18:27:26.892: INFO: Pod "webserver-deployment-795d758f88-6q7br" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6q7br webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-6q7br 4513fb4a-7400-4aa6-9f3b-e56f38a91da2 3422384 0 2020-10-05 18:27:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645c1a7 0xc00645c1a8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-10-05 18:27:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.892: INFO: Pod "webserver-deployment-795d758f88-94s8k" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-94s8k webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-94s8k b752d6e4-531c-46e3-b9fb-87028c6ba48f 3422414 0 2020-10-05 18:27:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645c3d7 0xc00645c3d8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-10-05 18:27:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.893: INFO: Pod "webserver-deployment-795d758f88-994nj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-994nj webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-994nj bb9ddd13-cccb-4c41-867d-1f30d410f15d 3422460 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645c587 0xc00645c588}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.893: INFO: Pod "webserver-deployment-795d758f88-dxczr" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-dxczr webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-dxczr b0d2ad9a-4ced-4138-ad1d-1b2a4cc54660 3422459 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645c6c7 0xc00645c6c8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:P
odScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.893: INFO: Pod "webserver-deployment-795d758f88-fvvt8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fvvt8 webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-fvvt8 e018dab3-578a-478b-8050-97b2b69f8915 3422392 0 2020-10-05 18:27:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645c807 0xc00645c808}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-05 18:27:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.894: INFO: Pod "webserver-deployment-795d758f88-lrg7x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-lrg7x webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-lrg7x 187cfda8-b61e-400a-805d-096ac0f20faf 3422413 0 2020-10-05 18:27:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645c9f7 0xc00645c9f8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-05 18:27:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.894: INFO: Pod "webserver-deployment-795d758f88-rmtx6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rmtx6 webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-rmtx6 4c8d14cf-b617-4f26-864c-d036739cceab 3422494 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645cbb7 0xc00645cbb8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.894: INFO: Pod "webserver-deployment-795d758f88-v6dwb" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-v6dwb webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-v6dwb 90cd6a03-2979-46d8-b43d-4f0aefcbc2e6 3422466 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645cd27 0xc00645cd28}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Po
dScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.894: INFO: Pod "webserver-deployment-795d758f88-w4bf4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-w4bf4 webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-w4bf4 2a3cb59d-b092-4b19-9340-1c1b79dfd33c 3422477 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645ce97 0xc00645ce98}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:n
il,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSec
onds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.895: INFO: Pod "webserver-deployment-795d758f88-wnrgs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wnrgs webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-wnrgs bab34c83-6f9a-4699-8c9a-f81cad1017ff 3422438 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645cff7 0xc00645cff8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.895: INFO: Pod "webserver-deployment-795d758f88-xk8kq" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-xk8kq webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-xk8kq 9f534a7e-f5c2-47f5-b278-83d8d75ca950 3422467 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645d187 0xc00645d188}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:P
odScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.895: INFO: Pod "webserver-deployment-795d758f88-xzxpj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xzxpj webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-xzxpj edf72d17-72b0-437e-b7f4-67af84d42fec 3422475 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645d307 0xc00645d308}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:
nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.895: INFO: Pod "webserver-deployment-795d758f88-zfkkr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zfkkr webserver-deployment-795d758f88- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-795d758f88-zfkkr c2d71d9f-1fdb-4b82-b192-bf62a6b3a4cd 3422382 0 2020-10-05 18:27:23 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e60d2a98-a221-4873-a9c4-2d751f94a995 0xc00645d487 0xc00645d488}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e60d2a98-a221-4873-a9c4-2d751f94a995\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-05 18:27:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.895: INFO: Pod "webserver-deployment-dd94f59b7-4vg4r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4vg4r webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-4vg4r c1a7519e-2e1d-46cb-aa0f-cf5fb27ee42e 3422469 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc00645d687 0xc00645d688}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.896: INFO: Pod "webserver-deployment-dd94f59b7-564w2" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-564w2 webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-564w2 821fc1d8-205d-4f44-9692-0878cf1c94ee 3422307 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc00645d7f7 0xc00645d7f8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.82,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ed1ebdfa594a57e2db449c76426aeda03608307f34d24117ff8cab856c590b8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.896: INFO: Pod "webserver-deployment-dd94f59b7-6fll6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6fll6 webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-6fll6 ac45369f-677f-4a96-b0b5-8b2e757c5299 3422457 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc00645da17 0xc00645da18}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.896: INFO: Pod "webserver-deployment-dd94f59b7-9rv9x" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9rv9x webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-9rv9x d8532caf-4380-428d-bd7a-16b05b1191c1 3422341 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc00645db77 0xc00645db78}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.100,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://480d50979c3397d9f81a23f2bd6ddbaf07db0f63c48a779c1cdbb100b6abcd6d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.896: INFO: Pod "webserver-deployment-dd94f59b7-bbjd9" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bbjd9 webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-bbjd9 98c7f7d8-91c4-4440-a511-029facaa95d1 3422468 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc00645dd37 0xc00645dd38}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.897: INFO: Pod "webserver-deployment-dd94f59b7-bhghn" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bhghn webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-bhghn be2f91ee-7078-46ba-b5d4-139116edb482 3422458 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc00645ded7 0xc00645ded8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{
PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.897: INFO: Pod "webserver-deployment-dd94f59b7-bldqg" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bldqg webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-bldqg 037b78fd-f077-460f-89be-df4eefc545e8 3422352 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4017 0xc0064d4018}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.84,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://11aeafa1b52781a44519243190ca8799afe7ba0e60858e3f91f5c0480e52adc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.897: INFO: Pod "webserver-deployment-dd94f59b7-gw6hg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gw6hg webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-gw6hg 6fbce0d6-4cf6-45a5-9fbd-d0cabca34f62 3422454 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4217 0xc0064d4218}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.897: INFO: Pod "webserver-deployment-dd94f59b7-hth6l" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hth6l webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-hth6l c969380c-2ffe-4fcc-ae09-e25391ec8327 3422318 0 2020-10-05 18:27:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4367 0xc0064d4368}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.85\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.85,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0bfafd9b1f70a423db1e6291778558cfd44d4e68edc18947c7a6e65a6c293b91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 5 18:27:26.897: INFO: Pod "webserver-deployment-dd94f59b7-jjph7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jjph7 webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-jjph7 7d777a05-a9dd-4dd8-ad89-f9fa297488b2 3422464 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4527 0xc0064d4528}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 5 18:27:26.898: INFO: Pod "webserver-deployment-dd94f59b7-k9j5q" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-k9j5q webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-k9j5q 5927956a-7487-4836-b448-585de546f5e4 3422310 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4657 0xc0064d4658}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.83,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a44b95327e5a6eb8f16019e3e2628ff5abd6c214e711d99bf00190d0923ee459,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 5 18:27:26.898: INFO: Pod "webserver-deployment-dd94f59b7-krqlx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-krqlx webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-krqlx 91e6a055-dc3a-4951-8372-ad67391fbede 3422485 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4877 0xc0064d4878}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{
Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-05 18:27:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.898: INFO: Pod "webserver-deployment-dd94f59b7-l5smb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l5smb webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-l5smb 9b7ac716-3187-4049-82eb-5ae86a93a825 3422284 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4a47 0xc0064d4a48}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.99\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.99,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9ce57364af83c60bf73f62b180bd6e8e52ebc8b5402ab5af8c0030ce1c16e5c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 5 18:27:26.898: INFO: Pod "webserver-deployment-dd94f59b7-l656c" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l656c webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-l656c 9d8decf1-72b4-4f08-973e-020e2bf3d434 3422446 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4c97 
0xc0064d4c98}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 5 18:27:26.899: INFO: Pod "webserver-deployment-dd94f59b7-pjqmc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pjqmc webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-pjqmc 8fb07883-31d7-4852-b808-35887bfba9ba 3422428 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4e07 0xc0064d4e08}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQ
DN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.899: INFO: Pod "webserver-deployment-dd94f59b7-tv4gx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tv4gx webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-tv4gx d22122ba-24c0-4ab5-af96-58f66fbabcb3 3422497 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d4f67 0xc0064d4f68}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2020-10-05 18:27:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.899: INFO: Pod "webserver-deployment-dd94f59b7-vbbpt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vbbpt webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-vbbpt e163fbd0-1883-4c1a-b953-4732bb0c1eca 3422478 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d5107 0xc0064d5108}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.899: INFO: Pod "webserver-deployment-dd94f59b7-vkrzl" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vkrzl webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-vkrzl 3d19a260-47f5-4f45-87b3-84fc3edde8df 3422258 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d5307 0xc0064d5308}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.98\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.98,StartTime:2020-10-05 18:27:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://049eb327102c6665296558e4afaea4a32c5ea76b3fe8118206830d7a76397b3b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.98,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.899: INFO: Pod "webserver-deployment-dd94f59b7-ws67d" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ws67d webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-ws67d dc7eef3e-0da2-472e-9919-180be1c23b76 3422465 0 2020-10-05 18:27:25 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d5507 0xc0064d5508}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 18:27:26.899: INFO: Pod "webserver-deployment-dd94f59b7-xqpv6" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xqpv6 webserver-deployment-dd94f59b7- deployment-6526 /api/v1/namespaces/deployment-6526/pods/webserver-deployment-dd94f59b7-xqpv6 79092b64-93b6-497f-be16-4cc696251eb4 3422264 0 2020-10-05 18:27:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ab94324b-171b-4a94-8584-2d4be776bec1 0xc0064d56b7 0xc0064d56b8}] [] [{kube-controller-manager Update v1 2020-10-05 18:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab94324b-171b-4a94-8584-2d4be776bec1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 18:27:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrw76,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrw76,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrw76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 18:27:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.2.81,StartTime:2020-10-05 18:27:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 18:27:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5b764464a1fa01f9c0fbd5579426e3e6233c61049a6fc1e8670ccef07825ea3a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 18:27:26.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6526" for this suite. 
• [SLOW TEST:17.075 seconds]
[sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":302,"skipped":4901,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 18:27:27.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 18:27:32.272: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 18:27:34.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519251, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 18:27:36.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519251, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 18:27:39.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519251, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 18:27:40.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519251, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 18:27:43.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519251, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 18:27:44.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519252, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737519251, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 18:27:48.053: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 18:27:48.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6536-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 18:27:50.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5195" for this suite.
STEP: Destroying namespace "webhook-5195-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:25.879 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":303,"skipped":4902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
Oct 5 18:27:53.343: INFO: Running AfterSuite actions on all nodes
Oct 5 18:27:53.343: INFO: Running AfterSuite actions on node 1
Oct 5 18:27:53.343: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0}

Ran 303 of 5232 Specs in 6328.843 seconds
SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped
PASS