I1027 10:31:27.410194 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1027 10:31:27.410375 7 e2e.go:129] Starting e2e run "4362a309-be96-4b27-a074-633bc102f0de" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1603794686 - Will randomize all specs
Will run 303 of 5232 specs
Oct 27 10:31:27.471: INFO: >>> kubeConfig: /root/.kube/config
Oct 27 10:31:27.474: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 27 10:31:27.494: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 27 10:31:27.520: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 27 10:31:27.520: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 27 10:31:27.520: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 27 10:31:27.527: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 27 10:31:27.527: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 27 10:31:27.527: INFO: e2e test version: v1.19.2
Oct 27 10:31:27.528: INFO: kube-apiserver version: v1.19.0
Oct 27 10:31:27.528: INFO: >>> kubeConfig: /root/.kube/config
Oct 27 10:31:27.533: INFO: Cluster IP family: ipv4
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 27 10:31:27.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Oct 27 10:31:27.630: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 27 10:31:32.731: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 27 10:31:32.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6585" for this suite.
• [SLOW TEST:5.242 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":2,"failed":0}
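The field this spec exercises is terminationMessagePolicy: with FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log (here the literal string DONE) reported as its termination message. A minimal stand-alone sketch of such a pod, not taken from the suite and assuming k8s.io/api and k8s.io/apimachinery are on the module path; the pod name, container name and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container prints "DONE" to its log and exits non-zero without
	// writing a termination-message file; FallbackToLogsOnError tells the
	// kubelet to use the log tail as the termination message instead.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox", // illustrative image
				Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Name, pod.Spec.Containers[0].TerminationMessagePolicy)
}

Once the container has terminated, the copied message is visible under status.containerStatuses[0].state.terminated.message, which is the value the spec compares against DONE.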
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 27 10:31:32.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check is all data is printed [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 27 10:31:32.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config version'
Oct 27 10:31:32.963: INFO: stderr: ""
Oct 27 10:31:32.963: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.2\", GitCommit:\"f5743093fd1c663cb0cbc89748f730662345d44d\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T13:41:02Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 27 10:31:32.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9369" for this suite.
•
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":2,"skipped":14,"failed":0}
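The assertion behind this spec is simply that `kubectl version` prints both halves of the data, the client block and the server block. The same check can be reproduced outside the suite with a few lines of Go, assuming kubectl is on PATH and a kubeconfig is configured:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same command the spec runs and make the same assertion:
	// both the client and the server version blocks must be present.
	out, err := exec.Command("kubectl", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	s := string(out)
	if !strings.Contains(s, "Client Version") || !strings.Contains(s, "Server Version") {
		panic("kubectl version output is missing data")
	}
	fmt.Println("kubectl version printed both client and server information")
}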
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:31:32.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:31:33.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4646" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":32,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:31:33.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 10:31:33.938: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 10:31:35.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391493, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391493, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391494, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391493, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 10:31:39.015: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:31:39.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4831" for this suite. 
STEP: Destroying namespace "webhook-4831-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.102 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":4,"skipped":42,"failed":0} [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:31:39.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:31:39.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8662" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":5,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:31:39.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7786 STEP: creating service affinity-clusterip in namespace services-7786 STEP: creating replication controller affinity-clusterip in namespace services-7786 I1027 10:31:40.099504 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-7786, replica count: 3 I1027 10:31:43.149901 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 10:31:46.150141 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 10:31:49.150463 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 10:31:49.157: INFO: Creating new exec pod Oct 27 10:31:54.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7786 execpod-affinity6knlh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Oct 27 10:31:57.343: INFO: stderr: "I1027 10:31:57.237087 45 log.go:181] (0xc000e3c8f0) (0xc000afa500) Create stream\nI1027 10:31:57.237146 45 log.go:181] (0xc000e3c8f0) (0xc000afa500) Stream added, broadcasting: 1\nI1027 10:31:57.242471 45 log.go:181] (0xc000e3c8f0) Reply frame received for 1\nI1027 10:31:57.242526 45 log.go:181] (0xc000e3c8f0) (0xc0005ee000) Create stream\nI1027 10:31:57.242545 45 log.go:181] (0xc000e3c8f0) (0xc0005ee000) Stream added, broadcasting: 3\nI1027 10:31:57.243321 45 log.go:181] (0xc000e3c8f0) Reply frame received for 3\nI1027 10:31:57.243352 45 log.go:181] (0xc000e3c8f0) (0xc000afa0a0) Create stream\nI1027 10:31:57.243367 45 log.go:181] (0xc000e3c8f0) (0xc000afa0a0) Stream added, broadcasting: 5\nI1027 10:31:57.244267 45 log.go:181] (0xc000e3c8f0) Reply frame received for 5\nI1027 10:31:57.336634 45 log.go:181] (0xc000e3c8f0) Data frame received for 5\nI1027 10:31:57.336662 45 log.go:181] (0xc000afa0a0) (5) Data frame handling\nI1027 10:31:57.336669 45 log.go:181] (0xc000afa0a0) (5) Data frame sent\nI1027 10:31:57.336674 45 
log.go:181] (0xc000e3c8f0) Data frame received for 5\nI1027 10:31:57.336678 45 log.go:181] (0xc000afa0a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1027 10:31:57.336709 45 log.go:181] (0xc000e3c8f0) Data frame received for 3\nI1027 10:31:57.336752 45 log.go:181] (0xc0005ee000) (3) Data frame handling\nI1027 10:31:57.338219 45 log.go:181] (0xc000e3c8f0) Data frame received for 1\nI1027 10:31:57.338240 45 log.go:181] (0xc000afa500) (1) Data frame handling\nI1027 10:31:57.338251 45 log.go:181] (0xc000afa500) (1) Data frame sent\nI1027 10:31:57.338265 45 log.go:181] (0xc000e3c8f0) (0xc000afa500) Stream removed, broadcasting: 1\nI1027 10:31:57.338328 45 log.go:181] (0xc000e3c8f0) Go away received\nI1027 10:31:57.338889 45 log.go:181] (0xc000e3c8f0) (0xc000afa500) Stream removed, broadcasting: 1\nI1027 10:31:57.338925 45 log.go:181] (0xc000e3c8f0) (0xc0005ee000) Stream removed, broadcasting: 3\nI1027 10:31:57.338951 45 log.go:181] (0xc000e3c8f0) (0xc000afa0a0) Stream removed, broadcasting: 5\n" Oct 27 10:31:57.344: INFO: stdout: "" Oct 27 10:31:57.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7786 execpod-affinity6knlh -- /bin/sh -x -c nc -zv -t -w 2 10.101.173.59 80' Oct 27 10:31:57.559: INFO: stderr: "I1027 10:31:57.479982 64 log.go:181] (0xc00018dad0) (0xc0005d3b80) Create stream\nI1027 10:31:57.480034 64 log.go:181] (0xc00018dad0) (0xc0005d3b80) Stream added, broadcasting: 1\nI1027 10:31:57.488712 64 log.go:181] (0xc00018dad0) Reply frame received for 1\nI1027 10:31:57.488769 64 log.go:181] (0xc00018dad0) (0xc0005d3c20) Create stream\nI1027 10:31:57.488783 64 log.go:181] (0xc00018dad0) (0xc0005d3c20) Stream added, broadcasting: 3\nI1027 10:31:57.490028 64 log.go:181] (0xc00018dad0) Reply frame received for 3\nI1027 10:31:57.490062 64 log.go:181] (0xc00018dad0) (0xc000c2e000) Create stream\nI1027 10:31:57.490071 64 log.go:181] (0xc00018dad0) (0xc000c2e000) Stream added, broadcasting: 5\nI1027 10:31:57.490985 64 log.go:181] (0xc00018dad0) Reply frame received for 5\nI1027 10:31:57.552699 64 log.go:181] (0xc00018dad0) Data frame received for 5\nI1027 10:31:57.552751 64 log.go:181] (0xc000c2e000) (5) Data frame handling\nI1027 10:31:57.552771 64 log.go:181] (0xc000c2e000) (5) Data frame sent\nI1027 10:31:57.552785 64 log.go:181] (0xc00018dad0) Data frame received for 5\nI1027 10:31:57.552796 64 log.go:181] (0xc000c2e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.173.59 80\nConnection to 10.101.173.59 80 port [tcp/http] succeeded!\nI1027 10:31:57.552911 64 log.go:181] (0xc00018dad0) Data frame received for 3\nI1027 10:31:57.552934 64 log.go:181] (0xc0005d3c20) (3) Data frame handling\nI1027 10:31:57.554567 64 log.go:181] (0xc00018dad0) Data frame received for 1\nI1027 10:31:57.554588 64 log.go:181] (0xc0005d3b80) (1) Data frame handling\nI1027 10:31:57.554598 64 log.go:181] (0xc0005d3b80) (1) Data frame sent\nI1027 10:31:57.554607 64 log.go:181] (0xc00018dad0) (0xc0005d3b80) Stream removed, broadcasting: 1\nI1027 10:31:57.554618 64 log.go:181] (0xc00018dad0) Go away received\nI1027 10:31:57.555023 64 log.go:181] (0xc00018dad0) (0xc0005d3b80) Stream removed, broadcasting: 1\nI1027 10:31:57.555053 64 log.go:181] (0xc00018dad0) (0xc0005d3c20) Stream removed, broadcasting: 3\nI1027 10:31:57.555062 64 log.go:181] (0xc00018dad0) (0xc000c2e000) Stream removed, broadcasting: 5\n" Oct 27 10:31:57.559: INFO: stdout: "" Oct 
27 10:31:57.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7786 execpod-affinity6knlh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.101.173.59:80/ ; done' Oct 27 10:31:57.879: INFO: stderr: "I1027 10:31:57.711344 82 log.go:181] (0xc00003ac60) (0xc000c1ac80) Create stream\nI1027 10:31:57.711405 82 log.go:181] (0xc00003ac60) (0xc000c1ac80) Stream added, broadcasting: 1\nI1027 10:31:57.714042 82 log.go:181] (0xc00003ac60) Reply frame received for 1\nI1027 10:31:57.714089 82 log.go:181] (0xc00003ac60) (0xc000c1ad20) Create stream\nI1027 10:31:57.714112 82 log.go:181] (0xc00003ac60) (0xc000c1ad20) Stream added, broadcasting: 3\nI1027 10:31:57.715439 82 log.go:181] (0xc00003ac60) Reply frame received for 3\nI1027 10:31:57.715484 82 log.go:181] (0xc00003ac60) (0xc000d041e0) Create stream\nI1027 10:31:57.715528 82 log.go:181] (0xc00003ac60) (0xc000d041e0) Stream added, broadcasting: 5\nI1027 10:31:57.716476 82 log.go:181] (0xc00003ac60) Reply frame received for 5\nI1027 10:31:57.775877 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.775906 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.775916 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.775931 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.775936 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.775943 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.783103 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.783134 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.783153 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.783220 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.783243 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.783262 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\nI1027 10:31:57.783368 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.783390 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.783401 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.783437 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.783472 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.783517 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.788303 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.788331 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.788349 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.789226 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.789244 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.789251 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.789273 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.789294 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.789308 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.794195 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.794219 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 
10:31:57.794244 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.795086 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.795112 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.795123 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.795137 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.795146 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.795154 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.800680 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.800708 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.800720 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.800737 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.800745 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.800755 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.804232 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.804251 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.804264 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.804935 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.804966 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.804984 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.805009 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.805031 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.805044 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.810705 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.810736 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.810759 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.811013 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.811047 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.811066 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.811086 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.811099 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.811119 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.815336 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.815365 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.815387 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.816230 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.816273 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.816304 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.816329 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.816348 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.816381 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.820459 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.820470 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 
10:31:57.820476 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.821471 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.821485 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.821491 82 log.go:181] (0xc000d041e0) (5) Data frame sent\nI1027 10:31:57.821496 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.821500 82 log.go:181] (0xc000d041e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.821511 82 log.go:181] (0xc000d041e0) (5) Data frame sent\nI1027 10:31:57.821543 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.821549 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.821553 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.825625 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.825647 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.825670 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.826342 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.826362 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.826373 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.826388 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.826396 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.826404 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.833435 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.833456 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.833468 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.833957 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.833974 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.833984 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.834016 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.834041 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.834057 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.837840 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.837857 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.837868 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.838717 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.838740 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.838753 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.838770 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.838779 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.838789 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.842520 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.842571 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.842589 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.842609 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.842629 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.842654 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 
10:31:57.842666 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.842678 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.842697 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.847456 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.847479 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.847497 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.848518 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.848579 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.848609 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.848645 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.848673 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.848717 82 log.go:181] (0xc000d041e0) (5) Data frame sent\nI1027 10:31:57.848734 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.848756 82 log.go:181] (0xc000d041e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.848806 82 log.go:181] (0xc000d041e0) (5) Data frame sent\nI1027 10:31:57.855178 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.855198 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.855214 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.856056 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.856093 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.856112 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.856141 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.856160 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.856183 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.862107 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.862122 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.862130 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.862740 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.862751 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.862758 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.862783 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.862791 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.862800 82 log.go:181] (0xc000d041e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.59:80/\nI1027 10:31:57.870175 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.870205 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.870229 82 log.go:181] (0xc000c1ad20) (3) Data frame sent\nI1027 10:31:57.871205 82 log.go:181] (0xc00003ac60) Data frame received for 5\nI1027 10:31:57.871227 82 log.go:181] (0xc000d041e0) (5) Data frame handling\nI1027 10:31:57.871267 82 log.go:181] (0xc00003ac60) Data frame received for 3\nI1027 10:31:57.871295 82 log.go:181] (0xc000c1ad20) (3) Data frame handling\nI1027 10:31:57.873319 82 log.go:181] (0xc00003ac60) Data frame received for 1\nI1027 10:31:57.873337 82 log.go:181] (0xc000c1ac80) (1) Data frame handling\nI1027 10:31:57.873352 82 log.go:181] (0xc000c1ac80) (1) Data frame sent\nI1027 
10:31:57.873505 82 log.go:181] (0xc00003ac60) (0xc000c1ac80) Stream removed, broadcasting: 1\nI1027 10:31:57.873533 82 log.go:181] (0xc00003ac60) Go away received\nI1027 10:31:57.873915 82 log.go:181] (0xc00003ac60) (0xc000c1ac80) Stream removed, broadcasting: 1\nI1027 10:31:57.873931 82 log.go:181] (0xc00003ac60) (0xc000c1ad20) Stream removed, broadcasting: 3\nI1027 10:31:57.873938 82 log.go:181] (0xc00003ac60) (0xc000d041e0) Stream removed, broadcasting: 5\n"
Oct 27 10:31:57.880: INFO: stdout: "\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh\naffinity-clusterip-zdtxh"
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Received response from host: affinity-clusterip-zdtxh
Oct 27 10:31:57.880: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-7786, will wait for the garbage collector to delete the pods
Oct 27 10:31:58.024: INFO: Deleting ReplicationController affinity-clusterip took: 6.450247ms
Oct 27 10:31:58.524: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.358816ms
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 27 10:32:08.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7786" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:29.387 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":6,"skipped":92,"failed":0}
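The curl loop above is the observable side of ClientIP session affinity: all 16 requests from the exec pod were answered by the same backend, affinity-clusterip-zdtxh. On the Service itself this behaviour is a single field; a minimal sketch assuming k8s.io/api and k8s.io/apimachinery are on the module path (the selector and target port are illustrative; only the service name and port 80 come from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// With SessionAffinity set to ClientIP, kube-proxy pins each client
	// address to one endpoint, which is why every curl in the log landed
	// on the same pod.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip"}, // illustrative selector
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // illustrative backend port
			}},
		},
	}
	fmt.Println(svc.Name, svc.Spec.SessionAffinity)
}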
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 27 10:32:09.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 27 10:32:09.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 27 10:32:09.122: INFO: Waiting for terminating namespaces to be deleted...
Oct 27 10:32:09.124: INFO: Logging pods the apiserver thinks is on node kali-worker before test
Oct 27 10:32:09.178: INFO: rally-e7d2dadc-tw7qtz83-w4xjs from c-rally-e7d2dadc-149d1opd started at 2020-10-27 10:32:03 +0000 UTC (1 container statuses recorded)
Oct 27 10:32:09.178: INFO: Container rally-e7d2dadc-tw7qtz83 ready: true, restart count 0
Oct 27 10:32:09.178: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 27 10:32:09.178: INFO: Container kindnet-cni ready: true, restart count 0
Oct 27 10:32:09.178: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 27 10:32:09.178: INFO: Container kube-proxy ready: true, restart count 0
Oct 27 10:32:09.178: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test
Oct 27 10:32:09.184: INFO: rally-e7d2dadc-tw7qtz83-pj4mf from c-rally-e7d2dadc-149d1opd started at 2020-10-27 10:32:03 +0000 UTC (1 container statuses recorded)
Oct 27 10:32:09.184: INFO: Container rally-e7d2dadc-tw7qtz83 ready: true, restart count 0
Oct 27 10:32:09.184: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 27 10:32:09.184: INFO: Container kindnet-cni ready: true, restart count 0
Oct 27 10:32:09.184: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 27 10:32:09.184: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-3bceb8cd-2a46-41d6-bd8f-8d46ef886821 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-3bceb8cd-2a46-41d6-bd8f-8d46ef886821 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-3bceb8cd-2a46-41d6-bd8f-8d46ef886821
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 27 10:32:17.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9455" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.647 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":7,"skipped":115,"failed":0}
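The scheduling check above applies a random label to one node and then relaunches the pod with a matching nodeSelector, which is all the mechanism amounts to: a label on the node plus the same key/value on the pod spec. A minimal sketch, assuming k8s.io/api and k8s.io/apimachinery are available; the label key and image are illustrative stand-ins for the generated ones in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The pod can only be scheduled onto a node that carries exactly this
	// label, mirroring the "NodeSelector is respected if matching" check.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"}, // illustrative name
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-example": "42", // illustrative key; the suite uses a random one
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image
			}},
		},
	}
	fmt.Println(pod.Name, pod.Spec.NodeSelector)
}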
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 27 10:32:17.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 27 10:32:17.737: INFO: Creating deployment "webserver-deployment"
Oct 27 10:32:17.746: INFO: Waiting for observed generation 1
Oct 27 10:32:19.816: INFO: Waiting for all required pods to come up
Oct 27 10:32:20.095: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Oct 27 10:32:32.160: INFO: Waiting for deployment "webserver-deployment" to complete
Oct 27 10:32:32.167: INFO: Updating deployment "webserver-deployment" with a non-existent image
Oct 27 10:32:32.174: INFO: Updating deployment webserver-deployment
Oct 27 10:32:32.174: INFO: Waiting for observed generation 2
Oct 27 10:32:34.475: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Oct 27 10:32:34.478: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Oct 27 10:32:34.844: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Oct 27 10:32:35.158: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Oct 27 10:32:35.158: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Oct 27 10:32:35.160: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Oct 27 10:32:35.164: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Oct 27 10:32:35.164: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Oct 27 10:32:35.172: INFO: Updating deployment webserver-deployment
Oct 27
10:32:35.172: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 27 10:32:35.585: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 27 10:32:38.120: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 27 10:32:39.042: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3822 /apis/apps/v1/namespaces/deployment-3822/deployments/webserver-deployment aaef1576-5212-48ef-8158-158d5fe58d84 8955623 3 2020-10-27 10:32:17 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036cb148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not 
have minimum availability.,LastUpdateTime:2020-10-27 10:32:35 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-10-27 10:32:35 +0000 UTC,LastTransitionTime:2020-10-27 10:32:17 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 27 10:32:39.262: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-3822 /apis/apps/v1/namespaces/deployment-3822/replicasets/webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 8955611 3 2020-10-27 10:32:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment aaef1576-5212-48ef-8158-158d5fe58d84 0xc0036cb637 0xc0036cb638}] [] [{kube-controller-manager Update apps/v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aaef1576-5212-48ef-8158-158d5fe58d84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036cb6b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 10:32:39.262: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 27 10:32:39.262: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-3822 /apis/apps/v1/namespaces/deployment-3822/replicasets/webserver-deployment-dd94f59b7 
651a8278-ebca-4933-b937-e825e88ba9e2 8955619 3 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment aaef1576-5212-48ef-8158-158d5fe58d84 0xc0036cb717 0xc0036cb718}] [] [{kube-controller-manager Update apps/v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aaef1576-5212-48ef-8158-158d5fe58d84\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003298068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 27 10:32:39.271: INFO: Pod "webserver-deployment-795d758f88-4ldjr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4ldjr webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-4ldjr a10af964-3af9-45f5-8d9a-16b24d724b15 8955675 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc003121e50 0xc003121e51}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.271: INFO: Pod "webserver-deployment-795d758f88-4ljpl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4ljpl webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-4ljpl de58eab1-0ffa-434b-ba31-0ccf5c8a02e9 8955521 0 2020-10-27 10:32:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4007 0xc0033a4008}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.271: INFO: Pod "webserver-deployment-795d758f88-5fk9h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5fk9h webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-5fk9h 43cde5f7-0902-41c5-a74f-9611cd4ae473 8955644 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a41b7 0xc0033a41b8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.271: INFO: Pod "webserver-deployment-795d758f88-5l2qh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5l2qh webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-5l2qh 3e912119-c860-45e7-909f-22693f13c6b0 8955511 0 2020-10-27 10:32:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4367 0xc0033a4368}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.272: INFO: Pod "webserver-deployment-795d758f88-6h8xw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6h8xw webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-6h8xw 159b4aea-6f48-4151-88d2-2031cd56a892 8955646 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4517 0xc0033a4518}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.272: INFO: Pod "webserver-deployment-795d758f88-72wmv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-72wmv webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-72wmv 568451c7-8b64-4c98-a268-4226fb56b43a 8955648 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a46c7 0xc0033a46c8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.272: INFO: Pod "webserver-deployment-795d758f88-8kdtr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8kdtr webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-8kdtr 2d18a4ee-f05d-41be-9d52-c53f471cdeb4 8955621 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4877 0xc0033a4878}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.272: INFO: Pod "webserver-deployment-795d758f88-fvz9c" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fvz9c webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-fvz9c 752cc722-c90b-47a3-8845-d77f2920a667 8955498 0 2020-10-27 10:32:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4a27 0xc0033a4a28}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.272: INFO: Pod "webserver-deployment-795d758f88-jbps4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jbps4 webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-jbps4 939d30b8-2d46-4c61-b26b-e91f72ae8c33 8955499 0 2020-10-27 10:32:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4bd7 0xc0033a4bd8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.272: INFO: Pod "webserver-deployment-795d758f88-m56cc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-m56cc webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-m56cc 8d4ef79c-df05-429b-8d0e-5ced6b176d69 8955515 0 2020-10-27 10:32:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4d87 0xc0033a4d88}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.273: INFO: Pod "webserver-deployment-795d758f88-sg9bw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sg9bw webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-sg9bw 44785c7c-9d86-4966-8e35-e940695e3fd1 8955640 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a4f37 0xc0033a4f38}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.273: INFO: Pod "webserver-deployment-795d758f88-tvc26" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tvc26 webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-tvc26 61d6e53c-331d-4b99-9554-737f56f8924d 8955628 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a50e7 0xc0033a50e8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.273: INFO: Pod "webserver-deployment-795d758f88-zc9dk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zc9dk webserver-deployment-795d758f88- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-795d758f88-zc9dk ed398f07-e207-40c8-b30c-017f728ab272 8955642 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 92ba1718-1128-4c4f-ad87-3274c67dd4fb 0xc0033a5297 0xc0033a5298}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92ba1718-1128-4c4f-ad87-3274c67dd4fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.273: INFO: Pod "webserver-deployment-dd94f59b7-bpfx6" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bpfx6 webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-bpfx6 c92eef7e-7431-44c0-911b-ef300e379cab 8955437 0 2020-10-27 10:32:18 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5447 0xc0033a5448}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.190\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.190,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2bce3fd4c516355968641c874b5f1ba9713d6fc3d273f929d3068fb2187dc28d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.190,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.273: INFO: Pod "webserver-deployment-dd94f59b7-ch88c" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ch88c webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-ch88c a02e13be-a959-4f43-a861-2a9bdc769c14 8955632 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a55f7 0xc0033a55f8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-gmt2q" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gmt2q webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-gmt2q 04020a28-529c-4ad6-9e05-0b8bf4b5d3d6 8955670 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5787 0xc0033a5788}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-h2xzj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h2xzj webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-h2xzj 1be56ca1-d1bc-4ca2-af9b-e2e681f699fb 8955669 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5917 0xc0033a5918}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-j4v2z" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-j4v2z webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-j4v2z 5a506cb6-a48b-4d14-841d-e9f63566e92a 8955673 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5aa7 0xc0033a5aa8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-knj78" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-knj78 webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-knj78 3e3ed54c-90d2-4445-b0a9-c93edca372c7 8955444 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5c37 0xc0033a5c38}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.189,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bae5c0d8b7e2fde5a6323b390983fb0c254b5c20c986ec9767ee91f59facc0e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-kx5lf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kx5lf webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-kx5lf 3c1acb71-e47f-4db4-8c1e-b26817031dc1 8955402 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5de7 0xc0033a5de8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.242,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e6cd1a7c25c07031e6beb72dd63adb8a8b9f347c3279655b980cd05b3acc8401,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-mlnpr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mlnpr webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-mlnpr f98082c4-ce0b-4d53-a1af-b13be4edf3f3 8955411 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0033a5f97 0xc0033a5f98}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.243\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.243,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5cb8b329d0f589eb6a6f299d8881fbfb03e22843d22c3ea32603b250bd81aca8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.274: INFO: Pod "webserver-deployment-dd94f59b7-n58kc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-n58kc webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-n58kc f8497937-a660-462d-9e19-c62976c4259a 8955399 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444147 0xc003444148}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.187,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c36bbadd482083639251f1acfa7147bd1563ce6169ae6bd3b31d8f5b0a4afb32,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.275: INFO: Pod "webserver-deployment-dd94f59b7-ntk9z" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ntk9z webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-ntk9z 99a7f729-f604-4ea4-b194-4e02ae3c3f7d 8955433 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0034442f7 0xc0034442f8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.244\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.244,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://341a2bd52225019ce9825158e8f1cfa8f714df375bba81a4e982bb9f9c470179,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.275: INFO: Pod "webserver-deployment-dd94f59b7-qmccv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qmccv webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-qmccv 3df646f8-1e47-486b-a964-575d412c36c7 8955624 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0034444a7 0xc0034444a8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.275: INFO: Pod "webserver-deployment-dd94f59b7-qz28w" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qz28w webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-qz28w b33e7e43-ec2d-4a8e-b068-f1635e4d8f32 8955627 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444637 0xc003444638}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.275: INFO: Pod "webserver-deployment-dd94f59b7-rrskz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rrskz webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-rrskz 0d7ff95d-0250-4642-a469-1038b2803922 8955612 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0034447c7 0xc0034447c8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.275: INFO: Pod "webserver-deployment-dd94f59b7-s2wwd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-s2wwd webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-s2wwd 38cf583a-8edf-4114-88f1-2cc6aae00e77 8955634 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444957 0xc003444958}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.275: INFO: Pod "webserver-deployment-dd94f59b7-stm8r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-stm8r webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-stm8r ab6ef2ea-46f9-4b17-be33-4ce08c929bff 8955614 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444ae7 0xc003444ae8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.276: INFO: Pod "webserver-deployment-dd94f59b7-v5jm4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v5jm4 webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-v5jm4 a35946c1-317d-491f-bf51-4a7f43d06346 8955448 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444c77 0xc003444c78}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.188\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.188,StartTime:2020-10-27 10:32:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://be92e72efb421cf695b460d753cb77f63440b1b36a5560a29584fcbad091e69d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.188,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.276: INFO: Pod "webserver-deployment-dd94f59b7-vfrf2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vfrf2 webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-vfrf2 799c6a1d-1698-46d7-83a6-6f1b1ad9dfa9 8955660 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444e27 0xc003444e28}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.276: INFO: Pod "webserver-deployment-dd94f59b7-vp9ct" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vp9ct webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-vp9ct b91b5ab8-2ee7-495f-ab98-ea95779b987d 8955377 0 2020-10-27 10:32:17 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003444fb7 0xc003444fb8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.241\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.241,StartTime:2020-10-27 10:32:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:32:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://28dbf45d191943236acc66d8964ba576da83ca5b1c56dd58691a56ce5037377c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.276: INFO: Pod "webserver-deployment-dd94f59b7-w584g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-w584g webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-w584g 53e830f0-5d6c-4452-b74e-d5bd0d951e55 8955636 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc003445167 0xc003445168}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:32:39.276: INFO: Pod "webserver-deployment-dd94f59b7-zdd79" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zdd79 webserver-deployment-dd94f59b7- deployment-3822 /api/v1/namespaces/deployment-3822/pods/webserver-deployment-dd94f59b7-zdd79 3ed73b1a-0790-467b-9f9e-ac0abc1cbd53 8955661 0 2020-10-27 10:32:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 651a8278-ebca-4933-b937-e825e88ba9e2 0xc0034452f7 0xc0034452f8}] [] [{kube-controller-manager Update v1 2020-10-27 10:32:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"651a8278-ebca-4933-b937-e825e88ba9e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:32:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q484v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q484v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q484v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:32:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 10:32:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:32:39.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3822" for this suite. • [SLOW TEST:22.081 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":8,"skipped":129,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:32:39.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 27 10:32:56.849: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:32:56.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7820" for this suite. • [SLOW TEST:17.337 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":145,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:32:57.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 27 10:32:57.276: INFO: Waiting up to 5m0s for pod "pod-befd8c3a-83d3-4547-9b70-44e8cb174e76" in namespace "emptydir-4124" to be "Succeeded or Failed" Oct 27 10:32:57.334: INFO: Pod "pod-befd8c3a-83d3-4547-9b70-44e8cb174e76": Phase="Pending", Reason="", readiness=false. Elapsed: 58.680878ms Oct 27 10:32:59.669: INFO: Pod "pod-befd8c3a-83d3-4547-9b70-44e8cb174e76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393676761s Oct 27 10:33:01.973: INFO: Pod "pod-befd8c3a-83d3-4547-9b70-44e8cb174e76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.696960613s Oct 27 10:33:04.183: INFO: Pod "pod-befd8c3a-83d3-4547-9b70-44e8cb174e76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.907355391s STEP: Saw pod success Oct 27 10:33:04.183: INFO: Pod "pod-befd8c3a-83d3-4547-9b70-44e8cb174e76" satisfied condition "Succeeded or Failed" Oct 27 10:33:04.381: INFO: Trying to get logs from node kali-worker2 pod pod-befd8c3a-83d3-4547-9b70-44e8cb174e76 container test-container: STEP: delete the pod Oct 27 10:33:05.451: INFO: Waiting for pod pod-befd8c3a-83d3-4547-9b70-44e8cb174e76 to disappear Oct 27 10:33:05.738: INFO: Pod pod-befd8c3a-83d3-4547-9b70-44e8cb174e76 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:05.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4124" for this suite. • [SLOW TEST:9.096 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:06.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 27 10:33:07.824: INFO: Waiting up to 5m0s for pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6" in namespace "emptydir-2680" to be "Succeeded or Failed" Oct 27 10:33:08.158: INFO: Pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 333.83386ms Oct 27 10:33:10.201: INFO: Pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376986884s Oct 27 10:33:12.423: INFO: Pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.599429481s Oct 27 10:33:14.633: INFO: Pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.809474794s Oct 27 10:33:16.861: INFO: Pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.037387378s STEP: Saw pod success Oct 27 10:33:16.861: INFO: Pod "pod-223fba43-0323-4f7a-836a-00e5dc24a7d6" satisfied condition "Succeeded or Failed" Oct 27 10:33:16.949: INFO: Trying to get logs from node kali-worker pod pod-223fba43-0323-4f7a-836a-00e5dc24a7d6 container test-container: STEP: delete the pod Oct 27 10:33:17.550: INFO: Waiting for pod pod-223fba43-0323-4f7a-836a-00e5dc24a7d6 to disappear Oct 27 10:33:17.565: INFO: Pod pod-223fba43-0323-4f7a-836a-00e5dc24a7d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:17.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2680" for this suite. • [SLOW TEST:11.546 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":168,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:17.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 10:33:18.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c" in namespace "downward-api-6058" to be "Succeeded or Failed" Oct 27 10:33:18.800: INFO: Pod "downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 83.262547ms Oct 27 10:33:20.804: INFO: Pod "downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087410016s Oct 27 10:33:23.022: INFO: Pod "downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.304853951s STEP: Saw pod success Oct 27 10:33:23.022: INFO: Pod "downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c" satisfied condition "Succeeded or Failed" Oct 27 10:33:23.032: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c container client-container: STEP: delete the pod Oct 27 10:33:23.074: INFO: Waiting for pod downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c to disappear Oct 27 10:33:23.092: INFO: Pod downwardapi-volume-fbc312c6-f1c6-4acd-b86c-f1a7b2f7cf2c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:23.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6058" for this suite. • [SLOW TEST:5.367 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":178,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:23.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:27.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7855" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":181,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:27.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-6d551771-c26c-4145-b570-be00e73bf547 STEP: Creating secret with name s-test-opt-upd-3ddf12ed-0b3b-4d53-86cc-398830ff6a88 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6d551771-c26c-4145-b570-be00e73bf547 STEP: Updating secret s-test-opt-upd-3ddf12ed-0b3b-4d53-86cc-398830ff6a88 STEP: Creating secret with name s-test-opt-create-035f1036-f999-4aff-a886-2e8a61a36cc5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9469" for this suite. 
• [SLOW TEST:12.658 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":195,"failed":0} [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:40.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Oct 27 10:33:40.204: INFO: namespace kubectl-2651 Oct 27 10:33:40.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2651' Oct 27 10:33:40.579: INFO: stderr: "" Oct 27 10:33:40.579: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 27 10:33:41.583: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:33:41.583: INFO: Found 0 / 1 Oct 27 10:33:42.583: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:33:42.583: INFO: Found 0 / 1 Oct 27 10:33:43.602: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:33:43.603: INFO: Found 1 / 1 Oct 27 10:33:43.603: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 27 10:33:43.605: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:33:43.605: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 27 10:33:43.605: INFO: wait on agnhost-primary startup in kubectl-2651 Oct 27 10:33:43.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs agnhost-primary-hkc5v agnhost-primary --namespace=kubectl-2651' Oct 27 10:33:43.832: INFO: stderr: "" Oct 27 10:33:43.832: INFO: stdout: "Paused\n" STEP: exposing RC Oct 27 10:33:43.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2651' Oct 27 10:33:44.010: INFO: stderr: "" Oct 27 10:33:44.010: INFO: stdout: "service/rm2 exposed\n" Oct 27 10:33:44.099: INFO: Service rm2 in namespace kubectl-2651 found. 
STEP: exposing service Oct 27 10:33:46.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2651' Oct 27 10:33:46.497: INFO: stderr: "" Oct 27 10:33:46.497: INFO: stdout: "service/rm3 exposed\n" Oct 27 10:33:47.105: INFO: Service rm3 in namespace kubectl-2651 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:49.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2651" for this suite. • [SLOW TEST:9.142 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":15,"skipped":195,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:49.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 27 10:33:49.376: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:33:57.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9079" for this suite. 
• [SLOW TEST:8.623 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":16,"skipped":196,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:33:57.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:33:58.394: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:34:02.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1706" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":207,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:34:02.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 27 10:34:02.717: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:34:14.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3739" for this suite. 
• [SLOW TEST:12.116 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":18,"skipped":209,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:34:14.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-926 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-926 STEP: Deleting pre-stop pod Oct 27 10:34:31.325: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:34:31.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-926" for this suite. 
• [SLOW TEST:16.638 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":19,"skipped":212,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:34:31.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:34:31.448: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b0e7b661-04d0-40cd-bad4-d96901e49d81" in namespace "security-context-test-31" to be "Succeeded or Failed" Oct 27 10:34:31.461: INFO: Pod "alpine-nnp-false-b0e7b661-04d0-40cd-bad4-d96901e49d81": Phase="Pending", Reason="", readiness=false. Elapsed: 12.956041ms Oct 27 10:34:33.466: INFO: Pod "alpine-nnp-false-b0e7b661-04d0-40cd-bad4-d96901e49d81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017357376s Oct 27 10:34:35.502: INFO: Pod "alpine-nnp-false-b0e7b661-04d0-40cd-bad4-d96901e49d81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054174833s Oct 27 10:34:37.507: INFO: Pod "alpine-nnp-false-b0e7b661-04d0-40cd-bad4-d96901e49d81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05829238s Oct 27 10:34:37.507: INFO: Pod "alpine-nnp-false-b0e7b661-04d0-40cd-bad4-d96901e49d81" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:34:37.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-31" for this suite. 
• [SLOW TEST:6.136 seconds] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":219,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:34:37.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-0b071b94-b6d5-4a46-b285-9655558f9bdb STEP: Creating a pod to test consume configMaps Oct 27 10:34:37.653: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a" in namespace "projected-7695" to be "Succeeded or Failed" Oct 27 10:34:37.670: INFO: Pod "pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.107341ms Oct 27 10:34:39.687: INFO: Pod "pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034102034s Oct 27 10:34:41.691: INFO: Pod "pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037586936s STEP: Saw pod success Oct 27 10:34:41.691: INFO: Pod "pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a" satisfied condition "Succeeded or Failed" Oct 27 10:34:41.693: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a container projected-configmap-volume-test: STEP: delete the pod Oct 27 10:34:41.739: INFO: Waiting for pod pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a to disappear Oct 27 10:34:41.764: INFO: Pod pod-projected-configmaps-3f178a61-30ca-4d1f-b3af-6e7dfbff0f1a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:34:41.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7695" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":21,"skipped":225,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:34:41.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:34:42.173: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7f825c9e-186a-41ff-bddd-a4687aa2a52c" in namespace "security-context-test-737" to be "Succeeded or Failed" Oct 27 10:34:42.207: INFO: Pod "busybox-user-65534-7f825c9e-186a-41ff-bddd-a4687aa2a52c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.785129ms Oct 27 10:34:44.347: INFO: Pod "busybox-user-65534-7f825c9e-186a-41ff-bddd-a4687aa2a52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173683806s Oct 27 10:34:46.351: INFO: Pod "busybox-user-65534-7f825c9e-186a-41ff-bddd-a4687aa2a52c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178506513s Oct 27 10:34:46.352: INFO: Pod "busybox-user-65534-7f825c9e-186a-41ff-bddd-a4687aa2a52c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:34:46.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-737" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":236,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:34:46.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 27 10:34:46.409: INFO: PodSpec: initContainers in spec.initContainers Oct 27 10:35:36.730: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6e8abdad-076c-4f25-9aee-19414aced3e2", GenerateName:"", Namespace:"init-container-8746", SelfLink:"/api/v1/namespaces/init-container-8746/pods/pod-init-6e8abdad-076c-4f25-9aee-19414aced3e2", UID:"7b553e51-ec54-4fdb-877f-4a8a6a4a3e85", ResourceVersion:"8957185", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63739391686, loc:(*time.Location)(0x7701840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"409129391"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003284040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003284060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003284080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0032840a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-thtqc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0033ba000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thtqc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thtqc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-thtqc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0030e6098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ee8000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030e6120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0030e6170)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0030e6178), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0030e617c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002ffe2b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391686, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391686, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391686, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739391686, loc:(*time.Location)(0x7701840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.1.216", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.216"}}, StartTime:(*v1.Time)(0xc0032840c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ee80e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ee8150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"containerd://ce10f6f885c21077b5496b04c6a0f4d02b3f2159f48412e46c7206eb28187fa2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003284100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032840e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0030e621f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:35:36.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8746" for this suite. • [SLOW TEST:50.423 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":23,"skipped":240,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:35:36.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-9e410799-2e43-4de7-8b78-3d1590d8f1b2 STEP: Creating a pod to test consume configMaps Oct 27 10:35:36.912: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091" in namespace "projected-9499" to be "Succeeded or Failed" Oct 27 10:35:37.030: INFO: Pod "pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091": Phase="Pending", Reason="", readiness=false. Elapsed: 118.182058ms Oct 27 10:35:39.126: INFO: Pod "pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213717489s Oct 27 10:35:41.233: INFO: Pod "pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.321411989s STEP: Saw pod success Oct 27 10:35:41.233: INFO: Pod "pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091" satisfied condition "Succeeded or Failed" Oct 27 10:35:41.236: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091 container projected-configmap-volume-test: STEP: delete the pod Oct 27 10:35:41.303: INFO: Waiting for pod pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091 to disappear Oct 27 10:35:41.305: INFO: Pod pod-projected-configmaps-810af29b-a19c-4856-8d62-416f3a4e8091 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:35:41.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9499" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:35:41.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-f169846e-bfe7-4629-bb5c-3870f4d56eb2 STEP: Creating a pod to test consume configMaps Oct 27 10:35:41.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca" in namespace "configmap-1504" to be "Succeeded or Failed" Oct 27 10:35:41.434: INFO: Pod "pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.110533ms Oct 27 10:35:43.498: INFO: Pod "pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075871542s Oct 27 10:35:45.502: INFO: Pod "pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080347507s STEP: Saw pod success Oct 27 10:35:45.502: INFO: Pod "pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca" satisfied condition "Succeeded or Failed" Oct 27 10:35:45.506: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca container configmap-volume-test: STEP: delete the pod Oct 27 10:35:45.587: INFO: Waiting for pod pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca to disappear Oct 27 10:35:45.605: INFO: Pod pod-configmaps-8a23a867-97dc-45ce-b022-d0540d39a8ca no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:35:45.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1504" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":280,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:35:45.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5086 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5086 STEP: creating replication controller externalsvc in namespace services-5086 I1027 10:35:46.192824 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5086, replica count: 2 I1027 10:35:49.243318 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 10:35:52.243574 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 27 10:35:52.300: INFO: Creating new exec pod Oct 27 10:35:56.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5086 execpod6stnd -- /bin/sh -x -c nslookup nodeport-service.services-5086.svc.cluster.local' Oct 27 10:35:56.618: INFO: stderr: "I1027 10:35:56.533408 173 log.go:181] (0xc0008dedc0) (0xc0003fb7c0) Create stream\nI1027 10:35:56.533469 173 log.go:181] (0xc0008dedc0) 
(0xc0003fb7c0) Stream added, broadcasting: 1\nI1027 10:35:56.538797 173 log.go:181] (0xc0008dedc0) Reply frame received for 1\nI1027 10:35:56.538843 173 log.go:181] (0xc0008dedc0) (0xc000308500) Create stream\nI1027 10:35:56.538855 173 log.go:181] (0xc0008dedc0) (0xc000308500) Stream added, broadcasting: 3\nI1027 10:35:56.539877 173 log.go:181] (0xc0008dedc0) Reply frame received for 3\nI1027 10:35:56.539917 173 log.go:181] (0xc0008dedc0) (0xc000308c80) Create stream\nI1027 10:35:56.539932 173 log.go:181] (0xc0008dedc0) (0xc000308c80) Stream added, broadcasting: 5\nI1027 10:35:56.540992 173 log.go:181] (0xc0008dedc0) Reply frame received for 5\nI1027 10:35:56.598523 173 log.go:181] (0xc0008dedc0) Data frame received for 5\nI1027 10:35:56.598551 173 log.go:181] (0xc000308c80) (5) Data frame handling\nI1027 10:35:56.598567 173 log.go:181] (0xc000308c80) (5) Data frame sent\n+ nslookup nodeport-service.services-5086.svc.cluster.local\nI1027 10:35:56.609084 173 log.go:181] (0xc0008dedc0) Data frame received for 3\nI1027 10:35:56.609106 173 log.go:181] (0xc000308500) (3) Data frame handling\nI1027 10:35:56.609132 173 log.go:181] (0xc000308500) (3) Data frame sent\nI1027 10:35:56.610297 173 log.go:181] (0xc0008dedc0) Data frame received for 3\nI1027 10:35:56.610321 173 log.go:181] (0xc000308500) (3) Data frame handling\nI1027 10:35:56.610350 173 log.go:181] (0xc000308500) (3) Data frame sent\nI1027 10:35:56.610615 173 log.go:181] (0xc0008dedc0) Data frame received for 5\nI1027 10:35:56.610638 173 log.go:181] (0xc000308c80) (5) Data frame handling\nI1027 10:35:56.610992 173 log.go:181] (0xc0008dedc0) Data frame received for 3\nI1027 10:35:56.611021 173 log.go:181] (0xc000308500) (3) Data frame handling\nI1027 10:35:56.612393 173 log.go:181] (0xc0008dedc0) Data frame received for 1\nI1027 10:35:56.612415 173 log.go:181] (0xc0003fb7c0) (1) Data frame handling\nI1027 10:35:56.612439 173 log.go:181] (0xc0003fb7c0) (1) Data frame sent\nI1027 10:35:56.612575 173 log.go:181] (0xc0008dedc0) (0xc0003fb7c0) Stream removed, broadcasting: 1\nI1027 10:35:56.612602 173 log.go:181] (0xc0008dedc0) Go away received\nI1027 10:35:56.613000 173 log.go:181] (0xc0008dedc0) (0xc0003fb7c0) Stream removed, broadcasting: 1\nI1027 10:35:56.613015 173 log.go:181] (0xc0008dedc0) (0xc000308500) Stream removed, broadcasting: 3\nI1027 10:35:56.613021 173 log.go:181] (0xc0008dedc0) (0xc000308c80) Stream removed, broadcasting: 5\n" Oct 27 10:35:56.619: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5086.svc.cluster.local\tcanonical name = externalsvc.services-5086.svc.cluster.local.\nName:\texternalsvc.services-5086.svc.cluster.local\nAddress: 10.101.203.60\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5086, will wait for the garbage collector to delete the pods Oct 27 10:35:56.681: INFO: Deleting ReplicationController externalsvc took: 7.822658ms Oct 27 10:35:57.081: INFO: Terminating ReplicationController externalsvc pods took: 400.40179ms Oct 27 10:36:08.737: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:08.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5086" for this suite. 
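The Services spec above flips nodeport-service from type=NodePort to type=ExternalName and then, from an exec pod, confirms via nslookup that cluster DNS now answers with a CNAME to externalsvc.services-5086.svc.cluster.local. A rough sketch of the target state of that Service object, built with the same core/v1 types; the namespace and FQDN are copied from the log, while the surrounding program is illustrative, and clearing ports/clusterIP during the conversion is assumed rather than shown:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// After the update, DNS lookups of the service name resolve to a CNAME
	// pointing at ExternalName instead of a cluster IP / node port.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service", Namespace: "services-5086"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "externalsvc.services-5086.svc.cluster.local",
		},
	}
	fmt.Println(svc.Spec.Type, "->", svc.Spec.ExternalName)
}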
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.223 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":26,"skipped":298,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:08.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9dfd58d1-dedf-4744-9245-474026e433dc STEP: Creating a pod to test consume secrets Oct 27 10:36:09.097: INFO: Waiting up to 5m0s for pod "pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0" in namespace "secrets-5989" to be "Succeeded or Failed" Oct 27 10:36:09.104: INFO: Pod "pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.946455ms Oct 27 10:36:11.324: INFO: Pod "pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227006463s Oct 27 10:36:13.329: INFO: Pod "pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.232038723s STEP: Saw pod success Oct 27 10:36:13.329: INFO: Pod "pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0" satisfied condition "Succeeded or Failed" Oct 27 10:36:13.332: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0 container secret-volume-test: STEP: delete the pod Oct 27 10:36:13.379: INFO: Waiting for pod pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0 to disappear Oct 27 10:36:13.396: INFO: Pod pod-secrets-7efb3d5f-3eaa-47d5-b7e9-78286bb5f3d0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:13.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5989" for this suite. STEP: Destroying namespace "secret-namespace-1202" for this suite. 
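The Secrets spec above creates a second namespace (secret-namespace-1202) holding a secret with the same name, then verifies that the pod in secrets-5989 mounts its own namespace's copy, because a secret volume source is always resolved in the pod's namespace. A compact sketch of that arrangement with the same core/v1 types; the secret names, keys, and command are illustrative rather than the generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Same secret name in two namespaces; only the one in the pod's
	// namespace is visible to the volume mount.
	mounted := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-name", Namespace: "secrets-a"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	decoy := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-name", Namespace: "secrets-b"},
		StringData: map[string]string{"data-1": "other-value"},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-example", Namespace: "secrets-a"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: mounted.Name},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name, "ignores", decoy.Namespace+"/"+decoy.Name)
}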
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":314,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:13.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:36:13.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4357' Oct 27 10:36:13.854: INFO: stderr: "" Oct 27 10:36:13.854: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 27 10:36:13.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4357' Oct 27 10:36:14.217: INFO: stderr: "" Oct 27 10:36:14.217: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 27 10:36:15.222: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:36:15.222: INFO: Found 0 / 1 Oct 27 10:36:16.222: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:36:16.222: INFO: Found 0 / 1 Oct 27 10:36:17.221: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:36:17.221: INFO: Found 1 / 1 Oct 27 10:36:17.221: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 27 10:36:17.225: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 10:36:17.225: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 27 10:36:17.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe pod agnhost-primary-rv4qv --namespace=kubectl-4357' Oct 27 10:36:17.359: INFO: stderr: "" Oct 27 10:36:17.360: INFO: stdout: "Name: agnhost-primary-rv4qv\nNamespace: kubectl-4357\nPriority: 0\nNode: kali-worker2/172.18.0.13\nStart Time: Tue, 27 Oct 2020 10:36:13 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.222\nIPs:\n IP: 10.244.1.222\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://7447eeacb636b0d93f2359c3384128234be31ce9cd787ec70a5ba9d8f0aa0dcb\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 27 Oct 2020 10:36:16 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4qsw9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4qsw9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4qsw9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-4357/agnhost-primary-rv4qv to kali-worker2\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Oct 27 10:36:17.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-4357' Oct 27 10:36:17.501: INFO: stderr: "" Oct 27 10:36:17.501: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4357\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-rv4qv\n" Oct 27 10:36:17.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-4357' Oct 27 10:36:17.626: INFO: stderr: "" Oct 27 10:36:17.626: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4357\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.102.7.48\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.222:6379\nSession Affinity: None\nEvents: \n" Oct 27 10:36:17.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe node kali-control-plane' Oct 27 10:36:17.856: INFO: stderr: "" Oct 27 10:36:17.856: 
INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 23 Sep 2020 08:28:40 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Tue, 27 Oct 2020 10:36:17 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 27 Oct 2020 10:32:08 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 27 Oct 2020 10:32:08 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 27 Oct 2020 10:32:08 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 27 Oct 2020 10:32:08 +0000 Wed, 23 Sep 2020 08:29:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: f18d6a3b53c14eaca999fce1081671aa\n System UUID: e919c2db-6960-4f78-a4d1-1e39795c20e3\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-6cvzb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 34d\n kube-system coredns-f9fd979d6-zzb7k 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 34d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kindnet-mx6h2 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 34d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-proxy-x4lnq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n local-path-storage local-path-provisioner-78776bfc44-sm58q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 27 10:36:17.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe namespace kubectl-4357' Oct 27 10:36:17.969: INFO: stderr: 
"" Oct 27 10:36:17.969: INFO: stdout: "Name: kubectl-4357\nLabels: e2e-framework=kubectl\n e2e-run=4362a309-be96-4b27-a074-633bc102f0de\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:17.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4357" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":28,"skipped":318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:17.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-c7e4cedf-babf-4dcb-9341-4dc74dfc783a STEP: Creating a pod to test consume configMaps Oct 27 10:36:18.063: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1" in namespace "projected-3797" to be "Succeeded or Failed" Oct 27 10:36:18.109: INFO: Pod "pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 46.172619ms Oct 27 10:36:20.114: INFO: Pod "pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05028354s Oct 27 10:36:22.117: INFO: Pod "pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053254391s STEP: Saw pod success Oct 27 10:36:22.117: INFO: Pod "pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1" satisfied condition "Succeeded or Failed" Oct 27 10:36:22.118: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1 container projected-configmap-volume-test: STEP: delete the pod Oct 27 10:36:22.209: INFO: Waiting for pod pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1 to disappear Oct 27 10:36:22.223: INFO: Pod pod-projected-configmaps-c0a89fc1-06aa-4a1d-9f80-2f38d7dca9e1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:22.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3797" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:22.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 10:36:22.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04" in namespace "downward-api-3483" to be "Succeeded or Failed" Oct 27 10:36:22.408: INFO: Pod "downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04": Phase="Pending", Reason="", readiness=false. Elapsed: 9.440714ms Oct 27 10:36:24.665: INFO: Pod "downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26618183s Oct 27 10:36:26.669: INFO: Pod "downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04": Phase="Running", Reason="", readiness=true. Elapsed: 4.270655089s Oct 27 10:36:28.713: INFO: Pod "downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.314223894s STEP: Saw pod success Oct 27 10:36:28.713: INFO: Pod "downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04" satisfied condition "Succeeded or Failed" Oct 27 10:36:28.719: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04 container client-container: STEP: delete the pod Oct 27 10:36:28.759: INFO: Waiting for pod downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04 to disappear Oct 27 10:36:28.775: INFO: Pod downwardapi-volume-8ea6cca9-55cd-4a02-88bc-c1a62ad19d04 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:28.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3483" for this suite. 
• [SLOW TEST:6.555 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":389,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:28.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-80fa21b0-ddc8-43db-854f-ee19e4a3aec9 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:28.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-225" for this suite. 
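The Secrets spec above passes by showing that a secret whose data map contains an empty key is rejected at creation time by API server validation. A short sketch of such an invalid object, using the core/v1 types; the secret name is illustrative, and the snippet only constructs the object rather than submitting it:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "" is not a valid data key, so creating this secret should fail
	// validation, which is the behaviour the conformance spec asserts.
	invalid := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-example"},
		Data: map[string][]byte{
			"": []byte("value-1"),
		},
	}
	fmt.Println("expected to be rejected on create:", invalid.Name)
}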
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":31,"skipped":397,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:28.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-gthr STEP: Creating a pod to test atomic-volume-subpath Oct 27 10:36:29.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gthr" in namespace "subpath-8124" to be "Succeeded or Failed" Oct 27 10:36:29.081: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.18508ms Oct 27 10:36:31.086: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009010565s Oct 27 10:36:33.091: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 4.014453316s Oct 27 10:36:35.095: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 6.018140088s Oct 27 10:36:37.099: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 8.022374929s Oct 27 10:36:39.103: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 10.026531905s Oct 27 10:36:41.107: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 12.030282726s Oct 27 10:36:43.111: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 14.034114879s Oct 27 10:36:45.116: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 16.039190927s Oct 27 10:36:47.120: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 18.043773143s Oct 27 10:36:49.125: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 20.048836189s Oct 27 10:36:51.129: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Running", Reason="", readiness=true. Elapsed: 22.052749232s Oct 27 10:36:53.134: INFO: Pod "pod-subpath-test-configmap-gthr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056864794s STEP: Saw pod success Oct 27 10:36:53.134: INFO: Pod "pod-subpath-test-configmap-gthr" satisfied condition "Succeeded or Failed" Oct 27 10:36:53.136: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-gthr container test-container-subpath-configmap-gthr: STEP: delete the pod Oct 27 10:36:53.285: INFO: Waiting for pod pod-subpath-test-configmap-gthr to disappear Oct 27 10:36:53.363: INFO: Pod pod-subpath-test-configmap-gthr no longer exists STEP: Deleting pod pod-subpath-test-configmap-gthr Oct 27 10:36:53.363: INFO: Deleting pod "pod-subpath-test-configmap-gthr" in namespace "subpath-8124" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:53.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8124" for this suite. • [SLOW TEST:24.409 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":32,"skipped":408,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:53.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 27 10:36:53.627: INFO: Waiting up to 5m0s for pod "downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0" in namespace "downward-api-6872" to be "Succeeded or Failed" Oct 27 10:36:53.632: INFO: Pod "downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.156298ms Oct 27 10:36:55.637: INFO: Pod "downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009821175s Oct 27 10:36:57.641: INFO: Pod "downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014129069s STEP: Saw pod success Oct 27 10:36:57.641: INFO: Pod "downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0" satisfied condition "Succeeded or Failed" Oct 27 10:36:57.645: INFO: Trying to get logs from node kali-worker2 pod downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0 container dapi-container: STEP: delete the pod Oct 27 10:36:57.763: INFO: Waiting for pod downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0 to disappear Oct 27 10:36:57.775: INFO: Pod downward-api-bd13ecef-e124-4e94-b3e0-af64f4b19ad0 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:57.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6872" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":33,"skipped":411,"failed":0} S ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:57.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Oct 27 10:36:57.869: INFO: Major version: 1 STEP: Confirm minor version Oct 27 10:36:57.869: INFO: cleanMinorVersion: 19 Oct 27 10:36:57.869: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:36:57.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-9470" for this suite. 
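The server version spec above issues a discovery request ("Request ServerVersion") and checks the major/minor fields. A minimal client-go sketch of the same call, assuming client-go is available and reusing the kubeconfig path the run already logs; error handling is kept deliberately terse:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the e2e run points at.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Discovery call behind the "Request ServerVersion" step in the log.
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("major:", info.Major, "minor:", info.Minor, "gitVersion:", info.GitVersion)
}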
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":34,"skipped":412,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:36:57.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Oct 27 10:37:03.178: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:37:03.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5052" for this suite. • [SLOW TEST:5.901 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":35,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:37:03.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6896 STEP: creating service affinity-nodeport-transition in namespace services-6896 STEP: creating replication controller affinity-nodeport-transition in namespace services-6896 I1027 10:37:04.241235 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6896, replica count: 3 I1027 10:37:07.291713 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 10:37:10.291914 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 10:37:10.299: INFO: Creating new exec pod Oct 27 10:37:17.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6896 execpod-affinity6pdqv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Oct 27 10:37:17.625: INFO: stderr: "I1027 10:37:17.532220 317 log.go:181] (0xc00003bc30) (0xc0005c6a00) Create stream\nI1027 10:37:17.532269 317 log.go:181] (0xc00003bc30) (0xc0005c6a00) Stream added, broadcasting: 1\nI1027 10:37:17.535148 317 log.go:181] (0xc00003bc30) Reply frame received for 1\nI1027 10:37:17.535354 317 log.go:181] (0xc00003bc30) (0xc000730820) Create stream\nI1027 10:37:17.535434 317 log.go:181] (0xc00003bc30) (0xc000730820) Stream added, broadcasting: 3\nI1027 10:37:17.537508 317 log.go:181] (0xc00003bc30) Reply frame received for 3\nI1027 10:37:17.537559 317 log.go:181] (0xc00003bc30) (0xc0005c6000) Create stream\nI1027 10:37:17.537591 317 log.go:181] (0xc00003bc30) (0xc0005c6000) Stream added, broadcasting: 5\nI1027 10:37:17.538986 317 log.go:181] (0xc00003bc30) Reply frame received for 5\nI1027 10:37:17.617690 317 log.go:181] (0xc00003bc30) Data frame received for 5\nI1027 10:37:17.617714 317 log.go:181] (0xc0005c6000) (5) Data frame handling\nI1027 10:37:17.617728 317 log.go:181] (0xc0005c6000) (5) Data frame sent\nI1027 10:37:17.617735 317 log.go:181] (0xc00003bc30) Data frame received for 5\nI1027 10:37:17.617740 317 log.go:181] (0xc0005c6000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI1027 10:37:17.617757 317 log.go:181] (0xc0005c6000) (5) Data frame sent\nI1027 10:37:17.618160 317 log.go:181] (0xc00003bc30) Data frame received for 3\nI1027 10:37:17.618188 317 log.go:181] (0xc000730820) (3) Data frame handling\nI1027 10:37:17.618215 317 log.go:181] (0xc00003bc30) Data frame received for 5\nI1027 10:37:17.618221 317 log.go:181] (0xc0005c6000) (5) Data frame handling\nI1027 10:37:17.619784 317 log.go:181] (0xc00003bc30) Data frame received for 1\nI1027 10:37:17.619812 317 log.go:181] (0xc0005c6a00) (1) Data frame handling\nI1027 10:37:17.619822 317 log.go:181] (0xc0005c6a00) (1) Data frame sent\nI1027 10:37:17.619832 317 log.go:181] (0xc00003bc30) (0xc0005c6a00) Stream removed, broadcasting: 1\nI1027 10:37:17.619852 317 log.go:181] (0xc00003bc30) Go away received\nI1027 10:37:17.620248 317 log.go:181] (0xc00003bc30) (0xc0005c6a00) Stream removed, broadcasting: 1\nI1027 10:37:17.620274 317 log.go:181] (0xc00003bc30) (0xc000730820) Stream removed, broadcasting: 3\nI1027 10:37:17.620282 
317 log.go:181] (0xc00003bc30) (0xc0005c6000) Stream removed, broadcasting: 5\n" Oct 27 10:37:17.625: INFO: stdout: "" Oct 27 10:37:17.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6896 execpod-affinity6pdqv -- /bin/sh -x -c nc -zv -t -w 2 10.109.145.32 80' Oct 27 10:37:17.834: INFO: stderr: "I1027 10:37:17.756500 334 log.go:181] (0xc00003a0b0) (0xc000cc01e0) Create stream\nI1027 10:37:17.756598 334 log.go:181] (0xc00003a0b0) (0xc000cc01e0) Stream added, broadcasting: 1\nI1027 10:37:17.758473 334 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1027 10:37:17.758541 334 log.go:181] (0xc00003a0b0) (0xc000cc0280) Create stream\nI1027 10:37:17.758578 334 log.go:181] (0xc00003a0b0) (0xc000cc0280) Stream added, broadcasting: 3\nI1027 10:37:17.759420 334 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1027 10:37:17.759457 334 log.go:181] (0xc00003a0b0) (0xc000cc0320) Create stream\nI1027 10:37:17.759468 334 log.go:181] (0xc00003a0b0) (0xc000cc0320) Stream added, broadcasting: 5\nI1027 10:37:17.760199 334 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1027 10:37:17.829392 334 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1027 10:37:17.829428 334 log.go:181] (0xc000cc0280) (3) Data frame handling\nI1027 10:37:17.829490 334 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 10:37:17.829508 334 log.go:181] (0xc000cc0320) (5) Data frame handling\nI1027 10:37:17.829524 334 log.go:181] (0xc000cc0320) (5) Data frame sent\nI1027 10:37:17.829540 334 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 10:37:17.829547 334 log.go:181] (0xc000cc0320) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.145.32 80\nConnection to 10.109.145.32 80 port [tcp/http] succeeded!\nI1027 10:37:17.830663 334 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1027 10:37:17.830682 334 log.go:181] (0xc000cc01e0) (1) Data frame handling\nI1027 10:37:17.830695 334 log.go:181] (0xc000cc01e0) (1) Data frame sent\nI1027 10:37:17.830718 334 log.go:181] (0xc00003a0b0) (0xc000cc01e0) Stream removed, broadcasting: 1\nI1027 10:37:17.830868 334 log.go:181] (0xc00003a0b0) Go away received\nI1027 10:37:17.831048 334 log.go:181] (0xc00003a0b0) (0xc000cc01e0) Stream removed, broadcasting: 1\nI1027 10:37:17.831068 334 log.go:181] (0xc00003a0b0) (0xc000cc0280) Stream removed, broadcasting: 3\nI1027 10:37:17.831074 334 log.go:181] (0xc00003a0b0) (0xc000cc0320) Stream removed, broadcasting: 5\n" Oct 27 10:37:17.834: INFO: stdout: "" Oct 27 10:37:17.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6896 execpod-affinity6pdqv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32191' Oct 27 10:37:18.070: INFO: stderr: "I1027 10:37:17.976241 352 log.go:181] (0xc000eb4f20) (0xc000464280) Create stream\nI1027 10:37:17.976317 352 log.go:181] (0xc000eb4f20) (0xc000464280) Stream added, broadcasting: 1\nI1027 10:37:17.981559 352 log.go:181] (0xc000eb4f20) Reply frame received for 1\nI1027 10:37:17.981589 352 log.go:181] (0xc000eb4f20) (0xc000464dc0) Create stream\nI1027 10:37:17.981597 352 log.go:181] (0xc000eb4f20) (0xc000464dc0) Stream added, broadcasting: 3\nI1027 10:37:17.982659 352 log.go:181] (0xc000eb4f20) Reply frame received for 3\nI1027 10:37:17.982689 352 log.go:181] (0xc000eb4f20) (0xc000724280) Create stream\nI1027 10:37:17.982698 352 log.go:181] (0xc000eb4f20) (0xc000724280) Stream added, broadcasting: 5\nI1027 
10:37:17.983743 352 log.go:181] (0xc000eb4f20) Reply frame received for 5\nI1027 10:37:18.060402 352 log.go:181] (0xc000eb4f20) Data frame received for 3\nI1027 10:37:18.060516 352 log.go:181] (0xc000464dc0) (3) Data frame handling\nI1027 10:37:18.062205 352 log.go:181] (0xc000eb4f20) Data frame received for 5\nI1027 10:37:18.062226 352 log.go:181] (0xc000724280) (5) Data frame handling\nI1027 10:37:18.062255 352 log.go:181] (0xc000724280) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 32191\nConnection to 172.18.0.12 32191 port [tcp/32191] succeeded!\nI1027 10:37:18.063812 352 log.go:181] (0xc000eb4f20) Data frame received for 5\nI1027 10:37:18.063877 352 log.go:181] (0xc000724280) (5) Data frame handling\nI1027 10:37:18.065477 352 log.go:181] (0xc000eb4f20) Data frame received for 1\nI1027 10:37:18.065493 352 log.go:181] (0xc000464280) (1) Data frame handling\nI1027 10:37:18.065513 352 log.go:181] (0xc000464280) (1) Data frame sent\nI1027 10:37:18.065523 352 log.go:181] (0xc000eb4f20) (0xc000464280) Stream removed, broadcasting: 1\nI1027 10:37:18.065805 352 log.go:181] (0xc000eb4f20) (0xc000464280) Stream removed, broadcasting: 1\nI1027 10:37:18.065818 352 log.go:181] (0xc000eb4f20) (0xc000464dc0) Stream removed, broadcasting: 3\nI1027 10:37:18.065918 352 log.go:181] (0xc000eb4f20) (0xc000724280) Stream removed, broadcasting: 5\n" Oct 27 10:37:18.070: INFO: stdout: "" Oct 27 10:37:18.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6896 execpod-affinity6pdqv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32191' Oct 27 10:37:18.308: INFO: stderr: "I1027 10:37:18.218680 370 log.go:181] (0xc000350fd0) (0xc0009e2280) Create stream\nI1027 10:37:18.218734 370 log.go:181] (0xc000350fd0) (0xc0009e2280) Stream added, broadcasting: 1\nI1027 10:37:18.225226 370 log.go:181] (0xc000350fd0) Reply frame received for 1\nI1027 10:37:18.225284 370 log.go:181] (0xc000350fd0) (0xc0004940a0) Create stream\nI1027 10:37:18.225293 370 log.go:181] (0xc000350fd0) (0xc0004940a0) Stream added, broadcasting: 3\nI1027 10:37:18.226317 370 log.go:181] (0xc000350fd0) Reply frame received for 3\nI1027 10:37:18.226360 370 log.go:181] (0xc000350fd0) (0xc000a480a0) Create stream\nI1027 10:37:18.226376 370 log.go:181] (0xc000350fd0) (0xc000a480a0) Stream added, broadcasting: 5\nI1027 10:37:18.227259 370 log.go:181] (0xc000350fd0) Reply frame received for 5\nI1027 10:37:18.298377 370 log.go:181] (0xc000350fd0) Data frame received for 3\nI1027 10:37:18.298408 370 log.go:181] (0xc0004940a0) (3) Data frame handling\nI1027 10:37:18.298437 370 log.go:181] (0xc000350fd0) Data frame received for 5\nI1027 10:37:18.298445 370 log.go:181] (0xc000a480a0) (5) Data frame handling\nI1027 10:37:18.298460 370 log.go:181] (0xc000a480a0) (5) Data frame sent\nI1027 10:37:18.298467 370 log.go:181] (0xc000350fd0) Data frame received for 5\nI1027 10:37:18.298474 370 log.go:181] (0xc000a480a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32191\nConnection to 172.18.0.13 32191 port [tcp/32191] succeeded!\nI1027 10:37:18.299999 370 log.go:181] (0xc000350fd0) Data frame received for 1\nI1027 10:37:18.300049 370 log.go:181] (0xc0009e2280) (1) Data frame handling\nI1027 10:37:18.300098 370 log.go:181] (0xc0009e2280) (1) Data frame sent\nI1027 10:37:18.300138 370 log.go:181] (0xc000350fd0) (0xc0009e2280) Stream removed, broadcasting: 1\nI1027 10:37:18.300188 370 log.go:181] (0xc000350fd0) Go away received\nI1027 10:37:18.300628 370 log.go:181] (0xc000350fd0) 
(0xc0009e2280) Stream removed, broadcasting: 1\nI1027 10:37:18.300650 370 log.go:181] (0xc000350fd0) (0xc0004940a0) Stream removed, broadcasting: 3\nI1027 10:37:18.300661 370 log.go:181] (0xc000350fd0) (0xc000a480a0) Stream removed, broadcasting: 5\n" Oct 27 10:37:18.308: INFO: stdout: "" Oct 27 10:37:18.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6896 execpod-affinity6pdqv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32191/ ; done' Oct 27 10:37:18.634: INFO: stderr: "I1027 10:37:18.463783 390 log.go:181] (0xc000555550) (0xc00054ca00) Create stream\nI1027 10:37:18.463854 390 log.go:181] (0xc000555550) (0xc00054ca00) Stream added, broadcasting: 1\nI1027 10:37:18.468974 390 log.go:181] (0xc000555550) Reply frame received for 1\nI1027 10:37:18.469043 390 log.go:181] (0xc000555550) (0xc00054c000) Create stream\nI1027 10:37:18.469064 390 log.go:181] (0xc000555550) (0xc00054c000) Stream added, broadcasting: 3\nI1027 10:37:18.469919 390 log.go:181] (0xc000555550) Reply frame received for 3\nI1027 10:37:18.469951 390 log.go:181] (0xc000555550) (0xc000314fa0) Create stream\nI1027 10:37:18.469959 390 log.go:181] (0xc000555550) (0xc000314fa0) Stream added, broadcasting: 5\nI1027 10:37:18.470784 390 log.go:181] (0xc000555550) Reply frame received for 5\nI1027 10:37:18.536057 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.536092 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.536104 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.536126 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.536134 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.536140 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.540654 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.540688 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.540707 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.541530 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.541559 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.541573 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.541600 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.541611 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.541633 390 log.go:181] (0xc000314fa0) (5) Data frame sent\nI1027 10:37:18.541650 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.541659 390 log.go:181] (0xc000314fa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.541681 390 log.go:181] (0xc000314fa0) (5) Data frame sent\nI1027 10:37:18.547555 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.547576 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.547594 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.548367 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.548550 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.548575 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.548606 390 log.go:181] (0xc000555550) Data frame received 
for 3\nI1027 10:37:18.548642 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.548665 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.552939 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.552974 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.553013 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.553552 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.553577 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.553587 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.553614 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.553632 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.553658 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.558215 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.558233 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.558246 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.559172 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.559190 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.559197 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.559213 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.559237 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.559264 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.564189 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.564208 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.564227 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.564779 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.564815 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.564830 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.565804 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.565821 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.565839 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.568948 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.568966 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.568982 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.569246 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.569262 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.569271 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.569371 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.569396 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.569422 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.572583 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.572605 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.572620 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.573071 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.573130 390 
log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.573147 390 log.go:181] (0xc000314fa0) (5) Data frame sent\nI1027 10:37:18.573156 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.573163 390 log.go:181] (0xc000314fa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.573198 390 log.go:181] (0xc000314fa0) (5) Data frame sent\nI1027 10:37:18.573218 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.573229 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.573241 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.578100 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.578127 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.578146 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.578769 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.578787 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.578798 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.578807 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.578812 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.578818 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.584424 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.584446 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.584462 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.585160 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.585198 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.585216 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.585243 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.585264 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.585287 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.589318 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.589351 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.589386 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.589872 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.589900 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.589914 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.589952 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.589980 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.590010 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.594588 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.594612 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.594626 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.595020 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.595042 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.595074 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.595099 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.595132 390 log.go:181] (0xc000314fa0) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.595161 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.602067 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.602170 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.602212 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.602560 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.602573 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.602579 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.602864 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.602885 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.602906 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.608985 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.609011 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.609041 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.609703 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.609732 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.609749 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.609833 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.609865 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.609880 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.615396 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.615411 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.615422 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.616311 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.616338 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.616354 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.616379 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.616397 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.616407 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.620396 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.620417 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.620447 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.622717 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.622752 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.622765 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.622783 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.622793 390 log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.622802 390 log.go:181] (0xc000314fa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.625562 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.625574 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.625581 390 log.go:181] (0xc00054c000) (3) Data frame sent\nI1027 10:37:18.626395 390 log.go:181] (0xc000555550) Data frame received for 5\nI1027 10:37:18.626410 390 
log.go:181] (0xc000314fa0) (5) Data frame handling\nI1027 10:37:18.626436 390 log.go:181] (0xc000555550) Data frame received for 3\nI1027 10:37:18.626453 390 log.go:181] (0xc00054c000) (3) Data frame handling\nI1027 10:37:18.628046 390 log.go:181] (0xc000555550) Data frame received for 1\nI1027 10:37:18.628065 390 log.go:181] (0xc00054ca00) (1) Data frame handling\nI1027 10:37:18.628073 390 log.go:181] (0xc00054ca00) (1) Data frame sent\nI1027 10:37:18.628089 390 log.go:181] (0xc000555550) (0xc00054ca00) Stream removed, broadcasting: 1\nI1027 10:37:18.628106 390 log.go:181] (0xc000555550) Go away received\nI1027 10:37:18.628513 390 log.go:181] (0xc000555550) (0xc00054ca00) Stream removed, broadcasting: 1\nI1027 10:37:18.628533 390 log.go:181] (0xc000555550) (0xc00054c000) Stream removed, broadcasting: 3\nI1027 10:37:18.628542 390 log.go:181] (0xc000555550) (0xc000314fa0) Stream removed, broadcasting: 5\n" Oct 27 10:37:18.635: INFO: stdout: "\naffinity-nodeport-transition-snwjp\naffinity-nodeport-transition-dfhzl\naffinity-nodeport-transition-snwjp\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-dfhzl\naffinity-nodeport-transition-dfhzl\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-dfhzl\naffinity-nodeport-transition-dfhzl\naffinity-nodeport-transition-snwjp\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-dfhzl\naffinity-nodeport-transition-k6sjt" Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-snwjp Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-dfhzl Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-snwjp Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-dfhzl Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-dfhzl Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-dfhzl Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-dfhzl Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-snwjp Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-dfhzl Oct 27 10:37:18.635: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6896 execpod-affinity6pdqv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32191/ ; done' Oct 27 10:37:18.946: INFO: stderr: "I1027 10:37:18.775688 408 log.go:181] (0xc00026e000) (0xc0000d2d20) Create stream\nI1027 10:37:18.775748 408 log.go:181] (0xc00026e000) (0xc0000d2d20) Stream added, broadcasting: 1\nI1027 10:37:18.780169 408 log.go:181] (0xc00026e000) 
Reply frame received for 1\nI1027 10:37:18.780212 408 log.go:181] (0xc00026e000) (0xc000208d20) Create stream\nI1027 10:37:18.780230 408 log.go:181] (0xc00026e000) (0xc000208d20) Stream added, broadcasting: 3\nI1027 10:37:18.781755 408 log.go:181] (0xc00026e000) Reply frame received for 3\nI1027 10:37:18.781784 408 log.go:181] (0xc00026e000) (0xc00043c0a0) Create stream\nI1027 10:37:18.781792 408 log.go:181] (0xc00026e000) (0xc00043c0a0) Stream added, broadcasting: 5\nI1027 10:37:18.782686 408 log.go:181] (0xc00026e000) Reply frame received for 5\nI1027 10:37:18.842654 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.842687 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.842695 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.842713 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.842718 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.842723 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.849230 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.849257 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.849283 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.849912 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.849925 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.849946 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.849959 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.849966 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.849973 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.855661 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.855676 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.855687 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.856320 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.856332 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.856343 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.856351 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.856359 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.856373 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.862990 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.863005 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.863017 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.863590 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.863617 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.863629 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.863656 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.863678 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.863704 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.867559 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.867575 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 
10:37:18.867595 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.868519 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.868544 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.868555 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.868574 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.868584 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.868593 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.868603 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.868612 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.868632 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.871533 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.871551 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.871560 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.872083 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.872109 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.872118 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.872131 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.872138 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.872144 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.872152 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.872158 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.872173 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.878144 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.878160 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.878172 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.878897 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.878931 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.878950 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.878960 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.878972 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.878981 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.883136 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.883153 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.883165 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.883639 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.883664 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.883682 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.883859 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.883883 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.883904 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.891021 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.891113 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.891161 408 log.go:181] 
(0xc000208d20) (3) Data frame sent\nI1027 10:37:18.891995 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.892037 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.892054 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.892072 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.892082 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.892095 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.896446 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.896473 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.896490 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.897046 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.897064 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.897082 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.897088 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.897093 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.897113 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.897150 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.897175 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.897203 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.900334 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.900354 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.900366 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.900828 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.900994 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.901013 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.901034 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.901052 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.901085 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.905661 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.905690 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.905710 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.906286 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.906304 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.906331 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.906342 408 log.go:181] (0xc00026e000) Data frame received for 5\n+ echo\n+ curl -q -sI1027 10:37:18.906354 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\n --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.906365 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.906376 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.906394 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.906418 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\nI1027 10:37:18.911333 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.911360 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.911389 408 log.go:181] (0xc000208d20) (3) Data frame 
sent\nI1027 10:37:18.911905 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.911930 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.911940 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.911952 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.911959 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.911966 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.917661 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.917685 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.917714 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.918259 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.918282 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.918306 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.918416 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.918436 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.918445 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.922916 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.922934 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.922949 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.923542 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.923582 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.923596 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.923618 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.923641 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.923655 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.929419 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.929430 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.929436 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.930249 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.930273 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.930279 408 log.go:181] (0xc00043c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32191/\nI1027 10:37:18.930288 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.930293 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.930297 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.936313 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.936330 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.936341 408 log.go:181] (0xc000208d20) (3) Data frame sent\nI1027 10:37:18.937231 408 log.go:181] (0xc00026e000) Data frame received for 5\nI1027 10:37:18.937262 408 log.go:181] (0xc00043c0a0) (5) Data frame handling\nI1027 10:37:18.937307 408 log.go:181] (0xc00026e000) Data frame received for 3\nI1027 10:37:18.937352 408 log.go:181] (0xc000208d20) (3) Data frame handling\nI1027 10:37:18.938956 408 log.go:181] (0xc00026e000) Data frame received for 1\nI1027 10:37:18.939067 408 log.go:181] (0xc0000d2d20) (1) Data frame handling\nI1027 
10:37:18.939102 408 log.go:181] (0xc0000d2d20) (1) Data frame sent\nI1027 10:37:18.939120 408 log.go:181] (0xc00026e000) (0xc0000d2d20) Stream removed, broadcasting: 1\nI1027 10:37:18.939138 408 log.go:181] (0xc00026e000) Go away received\nI1027 10:37:18.939630 408 log.go:181] (0xc00026e000) (0xc0000d2d20) Stream removed, broadcasting: 1\nI1027 10:37:18.939668 408 log.go:181] (0xc00026e000) (0xc000208d20) Stream removed, broadcasting: 3\nI1027 10:37:18.939691 408 log.go:181] (0xc00026e000) (0xc00043c0a0) Stream removed, broadcasting: 5\n" Oct 27 10:37:18.947: INFO: stdout: "\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt\naffinity-nodeport-transition-k6sjt" Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Received response from host: affinity-nodeport-transition-k6sjt Oct 27 10:37:18.947: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6896, will wait for the garbage collector to delete the pods Oct 27 10:37:19.062: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.228171ms Oct 27 10:37:19.562: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.39155ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:37:28.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6896" for this suite. 
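The session-affinity spec exercised above flips spec.sessionAffinity on the affinity-nodeport-transition Service and watches whether repeated requests through the NodePort keep landing on the same endpoint. A rough sketch of the same exercise against an arbitrary NodePort Service, assuming placeholder names (my-ns, my-svc) and a reachable node address exported as NODE_IP/NODE_PORT, none of which come from this run:

  # Pin clients to one backend, then probe the NodePort the way the exec pod does.
  kubectl -n my-ns patch svc my-svc -p '{"spec":{"sessionAffinity":"ClientIP"}}'
  for i in $(seq 0 15); do curl -s --connect-timeout 2 "http://$NODE_IP:$NODE_PORT/"; echo; done
  # Dropping affinity again lets responses spread across all endpoints.
  kubectl -n my-ns patch svc my-svc -p '{"spec":{"sessionAffinity":"None"}}'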
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:24.960 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":36,"skipped":453,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:37:28.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-5257 STEP: creating replication controller nodeport-test in namespace services-5257 I1027 10:37:28.897555 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5257, replica count: 2 I1027 10:37:31.947972 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 10:37:34.948247 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 10:37:34.948: INFO: Creating new exec pod Oct 27 10:37:39.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5257 execpodql56m -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Oct 27 10:37:40.213: INFO: stderr: "I1027 10:37:40.131518 427 log.go:181] (0xc0001ea0b0) (0xc0008f60a0) Create stream\nI1027 10:37:40.131582 427 log.go:181] (0xc0001ea0b0) (0xc0008f60a0) Stream added, broadcasting: 1\nI1027 10:37:40.133578 427 log.go:181] (0xc0001ea0b0) Reply frame received for 1\nI1027 10:37:40.133612 427 log.go:181] (0xc0001ea0b0) (0xc000b12be0) Create stream\nI1027 10:37:40.133623 427 log.go:181] (0xc0001ea0b0) (0xc000b12be0) Stream added, broadcasting: 3\nI1027 10:37:40.134419 427 log.go:181] (0xc0001ea0b0) Reply frame received for 3\nI1027 10:37:40.134446 427 log.go:181] (0xc0001ea0b0) (0xc000c886e0) Create stream\nI1027 10:37:40.134454 427 log.go:181] (0xc0001ea0b0) 
(0xc000c886e0) Stream added, broadcasting: 5\nI1027 10:37:40.135228 427 log.go:181] (0xc0001ea0b0) Reply frame received for 5\nI1027 10:37:40.203020 427 log.go:181] (0xc0001ea0b0) Data frame received for 5\nI1027 10:37:40.203045 427 log.go:181] (0xc000c886e0) (5) Data frame handling\nI1027 10:37:40.203062 427 log.go:181] (0xc000c886e0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI1027 10:37:40.204093 427 log.go:181] (0xc0001ea0b0) Data frame received for 5\nI1027 10:37:40.204113 427 log.go:181] (0xc000c886e0) (5) Data frame handling\nI1027 10:37:40.204134 427 log.go:181] (0xc000c886e0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1027 10:37:40.204504 427 log.go:181] (0xc0001ea0b0) Data frame received for 3\nI1027 10:37:40.204542 427 log.go:181] (0xc000b12be0) (3) Data frame handling\nI1027 10:37:40.204571 427 log.go:181] (0xc0001ea0b0) Data frame received for 5\nI1027 10:37:40.204585 427 log.go:181] (0xc000c886e0) (5) Data frame handling\nI1027 10:37:40.206746 427 log.go:181] (0xc0001ea0b0) Data frame received for 1\nI1027 10:37:40.206774 427 log.go:181] (0xc0008f60a0) (1) Data frame handling\nI1027 10:37:40.206788 427 log.go:181] (0xc0008f60a0) (1) Data frame sent\nI1027 10:37:40.206804 427 log.go:181] (0xc0001ea0b0) (0xc0008f60a0) Stream removed, broadcasting: 1\nI1027 10:37:40.206913 427 log.go:181] (0xc0001ea0b0) Go away received\nI1027 10:37:40.207328 427 log.go:181] (0xc0001ea0b0) (0xc0008f60a0) Stream removed, broadcasting: 1\nI1027 10:37:40.207344 427 log.go:181] (0xc0001ea0b0) (0xc000b12be0) Stream removed, broadcasting: 3\nI1027 10:37:40.207352 427 log.go:181] (0xc0001ea0b0) (0xc000c886e0) Stream removed, broadcasting: 5\n" Oct 27 10:37:40.213: INFO: stdout: "" Oct 27 10:37:40.214: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5257 execpodql56m -- /bin/sh -x -c nc -zv -t -w 2 10.111.125.7 80' Oct 27 10:37:40.423: INFO: stderr: "I1027 10:37:40.355165 444 log.go:181] (0xc0000e8000) (0xc000c2e1e0) Create stream\nI1027 10:37:40.355223 444 log.go:181] (0xc0000e8000) (0xc000c2e1e0) Stream added, broadcasting: 1\nI1027 10:37:40.357172 444 log.go:181] (0xc0000e8000) Reply frame received for 1\nI1027 10:37:40.357243 444 log.go:181] (0xc0000e8000) (0xc000e90000) Create stream\nI1027 10:37:40.357271 444 log.go:181] (0xc0000e8000) (0xc000e90000) Stream added, broadcasting: 3\nI1027 10:37:40.358398 444 log.go:181] (0xc0000e8000) Reply frame received for 3\nI1027 10:37:40.358483 444 log.go:181] (0xc0000e8000) (0xc000e900a0) Create stream\nI1027 10:37:40.358520 444 log.go:181] (0xc0000e8000) (0xc000e900a0) Stream added, broadcasting: 5\nI1027 10:37:40.359328 444 log.go:181] (0xc0000e8000) Reply frame received for 5\nI1027 10:37:40.414027 444 log.go:181] (0xc0000e8000) Data frame received for 3\nI1027 10:37:40.414074 444 log.go:181] (0xc000e90000) (3) Data frame handling\nI1027 10:37:40.414100 444 log.go:181] (0xc0000e8000) Data frame received for 5\nI1027 10:37:40.414119 444 log.go:181] (0xc000e900a0) (5) Data frame handling\nI1027 10:37:40.414135 444 log.go:181] (0xc000e900a0) (5) Data frame sent\nI1027 10:37:40.414158 444 log.go:181] (0xc0000e8000) Data frame received for 5\nI1027 10:37:40.414177 444 log.go:181] (0xc000e900a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.125.7 80\nConnection to 10.111.125.7 80 port [tcp/http] succeeded!\nI1027 10:37:40.415855 444 log.go:181] (0xc0000e8000) Data frame received for 1\nI1027 10:37:40.415873 444 log.go:181] 
(0xc000c2e1e0) (1) Data frame handling\nI1027 10:37:40.415890 444 log.go:181] (0xc000c2e1e0) (1) Data frame sent\nI1027 10:37:40.416142 444 log.go:181] (0xc0000e8000) (0xc000c2e1e0) Stream removed, broadcasting: 1\nI1027 10:37:40.416236 444 log.go:181] (0xc0000e8000) Go away received\nI1027 10:37:40.416455 444 log.go:181] (0xc0000e8000) (0xc000c2e1e0) Stream removed, broadcasting: 1\nI1027 10:37:40.416469 444 log.go:181] (0xc0000e8000) (0xc000e90000) Stream removed, broadcasting: 3\nI1027 10:37:40.416477 444 log.go:181] (0xc0000e8000) (0xc000e900a0) Stream removed, broadcasting: 5\n" Oct 27 10:37:40.423: INFO: stdout: "" Oct 27 10:37:40.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5257 execpodql56m -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30168' Oct 27 10:37:40.621: INFO: stderr: "I1027 10:37:40.553460 463 log.go:181] (0xc00003a0b0) (0xc000c50000) Create stream\nI1027 10:37:40.553540 463 log.go:181] (0xc00003a0b0) (0xc000c50000) Stream added, broadcasting: 1\nI1027 10:37:40.555150 463 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1027 10:37:40.555195 463 log.go:181] (0xc00003a0b0) (0xc00030d180) Create stream\nI1027 10:37:40.555220 463 log.go:181] (0xc00003a0b0) (0xc00030d180) Stream added, broadcasting: 3\nI1027 10:37:40.555931 463 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1027 10:37:40.555981 463 log.go:181] (0xc00003a0b0) (0xc000a0e3c0) Create stream\nI1027 10:37:40.555998 463 log.go:181] (0xc00003a0b0) (0xc000a0e3c0) Stream added, broadcasting: 5\nI1027 10:37:40.556665 463 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1027 10:37:40.611633 463 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 10:37:40.611692 463 log.go:181] (0xc000a0e3c0) (5) Data frame handling\nI1027 10:37:40.611716 463 log.go:181] (0xc000a0e3c0) (5) Data frame sent\nI1027 10:37:40.611735 463 log.go:181] (0xc00003a0b0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.12 30168\nConnection to 172.18.0.12 30168 port [tcp/30168] succeeded!\nI1027 10:37:40.611756 463 log.go:181] (0xc000a0e3c0) (5) Data frame handling\nI1027 10:37:40.611804 463 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1027 10:37:40.611822 463 log.go:181] (0xc00030d180) (3) Data frame handling\nI1027 10:37:40.613315 463 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1027 10:37:40.613337 463 log.go:181] (0xc000c50000) (1) Data frame handling\nI1027 10:37:40.613345 463 log.go:181] (0xc000c50000) (1) Data frame sent\nI1027 10:37:40.613354 463 log.go:181] (0xc00003a0b0) (0xc000c50000) Stream removed, broadcasting: 1\nI1027 10:37:40.613444 463 log.go:181] (0xc00003a0b0) Go away received\nI1027 10:37:40.613647 463 log.go:181] (0xc00003a0b0) (0xc000c50000) Stream removed, broadcasting: 1\nI1027 10:37:40.613664 463 log.go:181] (0xc00003a0b0) (0xc00030d180) Stream removed, broadcasting: 3\nI1027 10:37:40.613671 463 log.go:181] (0xc00003a0b0) (0xc000a0e3c0) Stream removed, broadcasting: 5\n" Oct 27 10:37:40.621: INFO: stdout: "" Oct 27 10:37:40.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5257 execpodql56m -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30168' Oct 27 10:37:40.841: INFO: stderr: "I1027 10:37:40.759360 481 log.go:181] (0xc00001e000) (0xc0007d21e0) Create stream\nI1027 10:37:40.759431 481 log.go:181] (0xc00001e000) (0xc0007d21e0) Stream added, broadcasting: 1\nI1027 10:37:40.761797 481 log.go:181] 
(0xc00001e000) Reply frame received for 1\nI1027 10:37:40.761837 481 log.go:181] (0xc00001e000) (0xc000a12000) Create stream\nI1027 10:37:40.761846 481 log.go:181] (0xc00001e000) (0xc000a12000) Stream added, broadcasting: 3\nI1027 10:37:40.762806 481 log.go:181] (0xc00001e000) Reply frame received for 3\nI1027 10:37:40.762853 481 log.go:181] (0xc00001e000) (0xc0007a4280) Create stream\nI1027 10:37:40.762866 481 log.go:181] (0xc00001e000) (0xc0007a4280) Stream added, broadcasting: 5\nI1027 10:37:40.763768 481 log.go:181] (0xc00001e000) Reply frame received for 5\nI1027 10:37:40.832782 481 log.go:181] (0xc00001e000) Data frame received for 5\nI1027 10:37:40.832811 481 log.go:181] (0xc0007a4280) (5) Data frame handling\nI1027 10:37:40.832824 481 log.go:181] (0xc0007a4280) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30168\nConnection to 172.18.0.13 30168 port [tcp/30168] succeeded!\nI1027 10:37:40.833049 481 log.go:181] (0xc00001e000) Data frame received for 3\nI1027 10:37:40.833063 481 log.go:181] (0xc000a12000) (3) Data frame handling\nI1027 10:37:40.833079 481 log.go:181] (0xc00001e000) Data frame received for 5\nI1027 10:37:40.833084 481 log.go:181] (0xc0007a4280) (5) Data frame handling\nI1027 10:37:40.834504 481 log.go:181] (0xc00001e000) Data frame received for 1\nI1027 10:37:40.834532 481 log.go:181] (0xc0007d21e0) (1) Data frame handling\nI1027 10:37:40.834544 481 log.go:181] (0xc0007d21e0) (1) Data frame sent\nI1027 10:37:40.834553 481 log.go:181] (0xc00001e000) (0xc0007d21e0) Stream removed, broadcasting: 1\nI1027 10:37:40.834564 481 log.go:181] (0xc00001e000) Go away received\nI1027 10:37:40.835092 481 log.go:181] (0xc00001e000) (0xc0007d21e0) Stream removed, broadcasting: 1\nI1027 10:37:40.835117 481 log.go:181] (0xc00001e000) (0xc000a12000) Stream removed, broadcasting: 3\nI1027 10:37:40.835127 481 log.go:181] (0xc00001e000) (0xc0007a4280) Stream removed, broadcasting: 5\n" Oct 27 10:37:40.842: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:37:40.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5257" for this suite. 
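The NodePort spec above creates a type=NodePort Service, waits for its two backing pods, and checks reachability by service name, cluster IP, and each node address with nc. A rough hand-run equivalent, assuming a placeholder namespace and deployment (my-ns, web), an nginx image standing in for whatever backend the test actually deploys, and NODE_IP set to any node address (all assumptions of this sketch):

  # Expose a deployment through a NodePort and look up the allocated port.
  kubectl -n my-ns create deployment web --image=nginx
  kubectl -n my-ns expose deployment web --port=80 --type=NodePort
  NODE_PORT=$(kubectl -n my-ns get svc web -o jsonpath='{.spec.ports[0].nodePort}')
  # Same reachability probe the test's exec pod runs.
  nc -zv -t -w 2 "$NODE_IP" "$NODE_PORT"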
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.075 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":37,"skipped":463,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:37:40.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-566 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-566 STEP: Creating statefulset with conflicting port in namespace statefulset-566 STEP: Waiting until pod test-pod will start running in namespace statefulset-566 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-566 Oct 27 10:37:45.173: INFO: Observed stateful pod in namespace: statefulset-566, name: ss-0, uid: 15441814-5c89-4172-9246-5c6f8b463643, status phase: Pending. Waiting for statefulset controller to delete. Oct 27 10:37:45.611: INFO: Observed stateful pod in namespace: statefulset-566, name: ss-0, uid: 15441814-5c89-4172-9246-5c6f8b463643, status phase: Failed. Waiting for statefulset controller to delete. Oct 27 10:37:45.617: INFO: Observed stateful pod in namespace: statefulset-566, name: ss-0, uid: 15441814-5c89-4172-9246-5c6f8b463643, status phase: Failed. Waiting for statefulset controller to delete. 
Oct 27 10:37:45.668: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-566 STEP: Removing pod with conflicting port in namespace statefulset-566 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-566 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 27 10:37:51.957: INFO: Deleting all statefulset in ns statefulset-566 Oct 27 10:37:51.961: INFO: Scaling statefulset ss to 0 Oct 27 10:38:01.975: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 10:38:01.978: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:38:01.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-566" for this suite. • [SLOW TEST:21.152 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":38,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:38:02.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5360 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5360;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5360 A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-5360;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5360.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5360.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5360.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5360.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5360.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5360.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5360.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.18.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.18.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.18.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.18.194_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5360 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5360;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5360 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5360;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5360.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5360.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5360.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5360.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5360.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5360.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5360.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5360.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5360.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 194.18.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.18.194_udp@PTR;check="$$(dig +tcp +noall +answer +search 194.18.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.18.194_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 10:38:10.286: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.290: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.292: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.295: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.297: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.299: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.304: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.307: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.323: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.325: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.327: INFO: Unable to read jessie_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.330: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.332: INFO: Unable to read jessie_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.335: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.337: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.340: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:10.358: INFO: Lookups using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5360 wheezy_tcp@dns-test-service.dns-5360 wheezy_udp@dns-test-service.dns-5360.svc wheezy_tcp@dns-test-service.dns-5360.svc wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5360 jessie_tcp@dns-test-service.dns-5360 jessie_udp@dns-test-service.dns-5360.svc jessie_tcp@dns-test-service.dns-5360.svc jessie_udp@_http._tcp.dns-test-service.dns-5360.svc jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc] Oct 27 10:38:15.363: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.369: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.372: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.377: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.380: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.382: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.401: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.403: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.406: INFO: Unable to read jessie_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.409: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.411: INFO: Unable to read jessie_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.414: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.418: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:15.451: INFO: Lookups using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5360 wheezy_tcp@dns-test-service.dns-5360 wheezy_udp@dns-test-service.dns-5360.svc wheezy_tcp@dns-test-service.dns-5360.svc wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5360 jessie_tcp@dns-test-service.dns-5360 jessie_udp@dns-test-service.dns-5360.svc jessie_tcp@dns-test-service.dns-5360.svc jessie_udp@_http._tcp.dns-test-service.dns-5360.svc jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc] Oct 27 10:38:20.362: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.365: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.378: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.380: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.398: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.401: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.404: INFO: Unable to read jessie_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.407: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.409: INFO: Unable to read jessie_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.415: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.417: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:20.435: INFO: Lookups using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5360 wheezy_tcp@dns-test-service.dns-5360 wheezy_udp@dns-test-service.dns-5360.svc wheezy_tcp@dns-test-service.dns-5360.svc wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5360 jessie_tcp@dns-test-service.dns-5360 jessie_udp@dns-test-service.dns-5360.svc jessie_tcp@dns-test-service.dns-5360.svc jessie_udp@_http._tcp.dns-test-service.dns-5360.svc jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc] Oct 27 10:38:25.363: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.367: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.373: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.376: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.405: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.408: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.411: INFO: Unable to read jessie_udp@dns-test-service.dns-5360 from pod 
dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.414: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.418: INFO: Unable to read jessie_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.423: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:25.445: INFO: Lookups using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5360 wheezy_tcp@dns-test-service.dns-5360 wheezy_udp@dns-test-service.dns-5360.svc wheezy_tcp@dns-test-service.dns-5360.svc wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5360 jessie_tcp@dns-test-service.dns-5360 jessie_udp@dns-test-service.dns-5360.svc jessie_tcp@dns-test-service.dns-5360.svc jessie_udp@_http._tcp.dns-test-service.dns-5360.svc jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc] Oct 27 10:38:30.362: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.368: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.377: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360.svc from pod 
dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.379: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.382: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.399: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.402: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.404: INFO: Unable to read jessie_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.407: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.410: INFO: Unable to read jessie_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:30.439: INFO: Lookups using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5360 wheezy_tcp@dns-test-service.dns-5360 wheezy_udp@dns-test-service.dns-5360.svc wheezy_tcp@dns-test-service.dns-5360.svc wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5360 jessie_tcp@dns-test-service.dns-5360 jessie_udp@dns-test-service.dns-5360.svc jessie_tcp@dns-test-service.dns-5360.svc jessie_udp@_http._tcp.dns-test-service.dns-5360.svc jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc] Oct 27 10:38:35.363: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.367: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.371: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.378: INFO: Unable to read wheezy_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.382: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.387: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.416: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.419: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.421: INFO: Unable to read jessie_udp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.424: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360 from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.427: INFO: Unable to read jessie_udp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.432: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5360.svc from pod 
dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.435: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc from pod dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d: the server could not find the requested resource (get pods dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d) Oct 27 10:38:35.454: INFO: Lookups using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5360 wheezy_tcp@dns-test-service.dns-5360 wheezy_udp@dns-test-service.dns-5360.svc wheezy_tcp@dns-test-service.dns-5360.svc wheezy_udp@_http._tcp.dns-test-service.dns-5360.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5360.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5360 jessie_tcp@dns-test-service.dns-5360 jessie_udp@dns-test-service.dns-5360.svc jessie_tcp@dns-test-service.dns-5360.svc jessie_udp@_http._tcp.dns-test-service.dns-5360.svc jessie_tcp@_http._tcp.dns-test-service.dns-5360.svc] Oct 27 10:38:40.467: INFO: DNS probes using dns-5360/dns-test-d1aefe03-e372-4118-bbcc-6f1a732b744d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:38:41.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5360" for this suite. • [SLOW TEST:39.522 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":39,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:38:41.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod 
Oct 27 10:38:46.202: INFO: Successfully updated pod "labelsupdate290e54ad-898d-4af8-b31f-cca7d341fc4f" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:38:48.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9078" for this suite. • [SLOW TEST:6.716 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":567,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:38:48.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 27 10:38:56.561: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:38:56.577: INFO: Pod pod-with-poststart-http-hook still exists Oct 27 10:38:58.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:38:58.665: INFO: Pod pod-with-poststart-http-hook still exists Oct 27 10:39:00.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:39:00.582: INFO: Pod pod-with-poststart-http-hook still exists Oct 27 10:39:02.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:39:02.582: INFO: Pod pod-with-poststart-http-hook still exists Oct 27 10:39:04.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:39:04.581: INFO: Pod pod-with-poststart-http-hook still exists Oct 27 10:39:06.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:39:06.592: INFO: Pod pod-with-poststart-http-hook still exists Oct 27 10:39:08.577: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 27 10:39:08.580: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:39:08.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6295" for this suite. 
• [SLOW TEST:20.346 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":570,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:39:08.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2100 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 27 10:39:08.781: INFO: Found 0 stateful pods, waiting for 3 Oct 27 10:39:18.787: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:39:18.787: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:39:18.787: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 27 10:39:28.787: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:39:28.787: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:39:28.787: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 27 10:39:28.815: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 27 10:39:38.887: INFO: Updating stateful set ss2 Oct 27 
10:39:38.933: INFO: Waiting for Pod statefulset-2100/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Oct 27 10:39:49.562: INFO: Found 2 stateful pods, waiting for 3 Oct 27 10:39:59.566: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:39:59.566: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:39:59.566: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 27 10:39:59.588: INFO: Updating stateful set ss2 Oct 27 10:39:59.663: INFO: Waiting for Pod statefulset-2100/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 27 10:40:09.689: INFO: Updating stateful set ss2 Oct 27 10:40:09.696: INFO: Waiting for StatefulSet statefulset-2100/ss2 to complete update Oct 27 10:40:09.696: INFO: Waiting for Pod statefulset-2100/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 27 10:40:19.703: INFO: Deleting all statefulset in ns statefulset-2100 Oct 27 10:40:19.706: INFO: Scaling statefulset ss2 to 0 Oct 27 10:40:39.880: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 10:40:39.883: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:40:39.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2100" for this suite. 
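The canary and phased rolling update exercised above works by setting spec.updateStrategy.rollingUpdate.partition on the StatefulSet: only pods with an ordinal greater than or equal to the partition move to the new revision, and lowering the partition phases the rollout across the remaining pods. A hedged client-go sketch of that update (the clientset plumbing, namespace, and partition value are illustrative, not taken from the test source):

package canary

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setPartition pins the rolling update so that only pods with ordinal >=
// partition receive the new pod template revision.
func setPartition(ctx context.Context, cs kubernetes.Interface, ns, name string, partition int32) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	_, err = cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
	return err
}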
• [SLOW TEST:91.377 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":42,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:40:39.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:40:40.117: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:40:46.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-397" for this suite. 
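Listing CustomResourceDefinition objects, as exercised above, goes through the apiextensions.k8s.io API group rather than the core clientset. A hedged sketch using the apiextensions client; the rest.Config plumbing is assumed, and this targets the v1 CRD API served by the cluster version in this run:

package crdlist

import (
	"context"
	"fmt"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// listCRDs prints the name of every CustomResourceDefinition in the cluster,
// the same resource list the test above enumerates.
func listCRDs(ctx context.Context, cfg *rest.Config) error {
	cs, err := apiextensionsclientset.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}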
• [SLOW TEST:6.696 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":43,"skipped":621,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:40:46.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:40:46.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7029" for this suite. STEP: Destroying namespace "nspatchtest-a83864b7-1bcb-4590-b67c-7cf402635f19-7592" for this suite. 
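The Namespace test above applies a label with a patch and then reads the object back to confirm the label is present. A hedged client-go sketch of that patch-then-get flow; the label key and value here are illustrative, not read from the test source:

package nspatch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel adds a label to an existing Namespace via a
// strategic-merge patch and verifies it by re-reading the object.
func patchNamespaceLabel(ctx context.Context, cs kubernetes.Interface, name, key, value string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, value))
	if _, err := cs.CoreV1().Namespaces().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if ns.Labels[key] != value {
		return fmt.Errorf("label %s not found on namespace %s", key, name)
	}
	return nil
}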
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":44,"skipped":639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:40:46.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Oct 27 10:40:46.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f -' Oct 27 10:40:47.333: INFO: stderr: "" Oct 27 10:40:47.333: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 27 10:40:47.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config diff -f -' Oct 27 10:40:48.200: INFO: rc: 1 Oct 27 10:40:48.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete -f -' Oct 27 10:40:48.439: INFO: stderr: "" Oct 27 10:40:48.439: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:40:48.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7799" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":45,"skipped":697,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:40:48.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Oct 27 10:40:48.724: INFO: Created pod &Pod{ObjectMeta:{dns-484 dns-484 /api/v1/namespaces/dns-484/pods/dns-484 5aa4495a-250b-4882-a1a9-0e31ec003a56 8959724 0 2020-10-27 10:40:48 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-10-27 10:40:48 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vvndt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vvndt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vvndt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOpt
ions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:40:48.766: INFO: The status of Pod dns-484 is Pending, waiting for it to be Running (with Ready = true) Oct 27 10:40:50.771: INFO: The status of Pod dns-484 is Pending, waiting for it to be Running (with Ready = true) Oct 27 10:40:52.769: INFO: The status of Pod dns-484 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Oct 27 10:40:52.769: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-484 PodName:dns-484 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 10:40:52.769: INFO: >>> kubeConfig: /root/.kube/config I1027 10:40:52.814077 7 log.go:181] (0xc001bd40b0) (0xc001581180) Create stream I1027 10:40:52.814122 7 log.go:181] (0xc001bd40b0) (0xc001581180) Stream added, broadcasting: 1 I1027 10:40:52.819226 7 log.go:181] (0xc001bd40b0) Reply frame received for 1 I1027 10:40:52.819282 7 log.go:181] (0xc001bd40b0) (0xc00048d360) Create stream I1027 10:40:52.819299 7 log.go:181] (0xc001bd40b0) (0xc00048d360) Stream added, broadcasting: 3 I1027 10:40:52.820264 7 log.go:181] (0xc001bd40b0) Reply frame received for 3 I1027 10:40:52.820295 7 log.go:181] (0xc001bd40b0) (0xc001163180) Create stream I1027 10:40:52.820306 7 log.go:181] (0xc001bd40b0) (0xc001163180) Stream added, broadcasting: 5 I1027 10:40:52.821035 7 log.go:181] (0xc001bd40b0) Reply frame received for 5 I1027 10:40:52.902953 7 log.go:181] (0xc001bd40b0) Data frame received for 3 I1027 10:40:52.902999 7 log.go:181] (0xc00048d360) (3) Data frame handling I1027 10:40:52.903025 7 log.go:181] (0xc00048d360) (3) Data frame sent I1027 10:40:52.904598 7 log.go:181] (0xc001bd40b0) Data frame received for 3 I1027 10:40:52.904641 7 log.go:181] (0xc00048d360) (3) Data frame handling I1027 10:40:52.904677 7 log.go:181] (0xc001bd40b0) Data frame received for 5 I1027 10:40:52.904707 7 log.go:181] (0xc001163180) (5) Data frame handling I1027 10:40:52.906266 7 log.go:181] (0xc001bd40b0) Data frame received for 1 I1027 10:40:52.906282 7 log.go:181] (0xc001581180) (1) Data frame handling I1027 10:40:52.906297 7 log.go:181] (0xc001581180) (1) Data frame sent I1027 10:40:52.906406 7 log.go:181] (0xc001bd40b0) (0xc001581180) Stream removed, broadcasting: 1 I1027 10:40:52.906446 7 log.go:181] (0xc001bd40b0) Go away received I1027 10:40:52.907024 
7 log.go:181] (0xc001bd40b0) (0xc001581180) Stream removed, broadcasting: 1 I1027 10:40:52.907048 7 log.go:181] (0xc001bd40b0) (0xc00048d360) Stream removed, broadcasting: 3 I1027 10:40:52.907062 7 log.go:181] (0xc001bd40b0) (0xc001163180) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Oct 27 10:40:52.907: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-484 PodName:dns-484 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 10:40:52.907: INFO: >>> kubeConfig: /root/.kube/config I1027 10:40:52.938226 7 log.go:181] (0xc001f56420) (0xc0005e3040) Create stream I1027 10:40:52.938258 7 log.go:181] (0xc001f56420) (0xc0005e3040) Stream added, broadcasting: 1 I1027 10:40:52.941482 7 log.go:181] (0xc001f56420) Reply frame received for 1 I1027 10:40:52.941514 7 log.go:181] (0xc001f56420) (0xc001cb4e60) Create stream I1027 10:40:52.941529 7 log.go:181] (0xc001f56420) (0xc001cb4e60) Stream added, broadcasting: 3 I1027 10:40:52.942688 7 log.go:181] (0xc001f56420) Reply frame received for 3 I1027 10:40:52.942722 7 log.go:181] (0xc001f56420) (0xc00048d400) Create stream I1027 10:40:52.942734 7 log.go:181] (0xc001f56420) (0xc00048d400) Stream added, broadcasting: 5 I1027 10:40:52.943648 7 log.go:181] (0xc001f56420) Reply frame received for 5 I1027 10:40:53.018153 7 log.go:181] (0xc001f56420) Data frame received for 3 I1027 10:40:53.018190 7 log.go:181] (0xc001cb4e60) (3) Data frame handling I1027 10:40:53.018211 7 log.go:181] (0xc001cb4e60) (3) Data frame sent I1027 10:40:53.020100 7 log.go:181] (0xc001f56420) Data frame received for 3 I1027 10:40:53.020119 7 log.go:181] (0xc001cb4e60) (3) Data frame handling I1027 10:40:53.020485 7 log.go:181] (0xc001f56420) Data frame received for 5 I1027 10:40:53.020526 7 log.go:181] (0xc00048d400) (5) Data frame handling I1027 10:40:53.022767 7 log.go:181] (0xc001f56420) Data frame received for 1 I1027 10:40:53.022781 7 log.go:181] (0xc0005e3040) (1) Data frame handling I1027 10:40:53.022793 7 log.go:181] (0xc0005e3040) (1) Data frame sent I1027 10:40:53.022801 7 log.go:181] (0xc001f56420) (0xc0005e3040) Stream removed, broadcasting: 1 I1027 10:40:53.022916 7 log.go:181] (0xc001f56420) Go away received I1027 10:40:53.022965 7 log.go:181] (0xc001f56420) (0xc0005e3040) Stream removed, broadcasting: 1 I1027 10:40:53.023020 7 log.go:181] (0xc001f56420) (0xc001cb4e60) Stream removed, broadcasting: 3 I1027 10:40:53.023070 7 log.go:181] (0xc001f56420) (0xc00048d400) Stream removed, broadcasting: 5 Oct 27 10:40:53.023: INFO: Deleting pod dns-484... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:40:53.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-484" for this suite. 
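The verbose pod dump above amounts to a short manifest: dnsPolicy None plus an explicit dnsConfig. A sketch of an equivalent pod, using the nameserver and search value from the log (the pod name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo
spec:
  dnsPolicy: None                      # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]           # written into the pod's /etc/resolv.conf
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    args: ["pause"]
EOF

kubectl exec dns-config-demo -- cat /etc/resolv.conf    # should list only 1.1.1.1 and the custom search domain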
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":46,"skipped":705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:40:53.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-f67f42bf-7897-452b-87de-19f6fd2241e5 STEP: Creating secret with name secret-projected-all-test-volume-90ec0bbe-9c5f-4592-925b-81e4f0088389 STEP: Creating a pod to test Check all projections for projected volume plugin Oct 27 10:40:53.466: INFO: Waiting up to 5m0s for pod "projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0" in namespace "projected-628" to be "Succeeded or Failed" Oct 27 10:40:53.470: INFO: Pod "projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377023ms Oct 27 10:40:55.484: INFO: Pod "projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017312624s Oct 27 10:40:57.488: INFO: Pod "projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.02183858s Oct 27 10:40:59.493: INFO: Pod "projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026584558s STEP: Saw pod success Oct 27 10:40:59.493: INFO: Pod "projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0" satisfied condition "Succeeded or Failed" Oct 27 10:40:59.496: INFO: Trying to get logs from node kali-worker2 pod projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0 container projected-all-volume-test: STEP: delete the pod Oct 27 10:40:59.525: INFO: Waiting for pod projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0 to disappear Oct 27 10:40:59.535: INFO: Pod projected-volume-d119b90b-40a3-4872-b697-e2e72e7345a0 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:40:59.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-628" for this suite. 
• [SLOW TEST:6.447 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":47,"skipped":777,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:40:59.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4672.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4672.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4672.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 10:41:05.672: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.676: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.679: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.682: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.692: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.696: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.699: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod 
dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.702: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:05.709: INFO: Lookups using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local] Oct 27 10:41:10.715: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.718: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.721: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.724: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.731: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.733: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.735: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.738: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:10.745: INFO: Lookups using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local] Oct 27 10:41:15.714: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.717: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.721: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.725: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.733: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.736: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.739: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.742: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:15.747: INFO: Lookups using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local] Oct 27 10:41:20.714: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.718: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.722: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.725: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.735: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.738: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.741: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.743: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:20.750: INFO: Lookups using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local] Oct 27 10:41:25.748: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.751: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.754: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.757: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested 
resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.766: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.769: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.772: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.774: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:25.808: INFO: Lookups using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local] Oct 27 10:41:30.714: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.717: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.719: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.721: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.728: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.731: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.733: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.736: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local from pod dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169: the server could not find the requested resource (get pods dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169) Oct 27 10:41:30.799: INFO: Lookups using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4672.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4672.svc.cluster.local jessie_udp@dns-test-service-2.dns-4672.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4672.svc.cluster.local] Oct 27 10:41:35.772: INFO: DNS probes using dns-4672/dns-test-b5f5e19a-12a2-4165-8f9e-bdd3bea3b169 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:41:35.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4672" for this suite. • [SLOW TEST:36.371 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":48,"skipped":790,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:41:35.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-901 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 27 10:41:36.388: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 27 10:41:36.708: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 10:41:39.044: INFO: The 
status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 10:41:40.712: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 10:41:42.712: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:44.714: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:46.713: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:48.767: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:50.713: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:52.742: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:54.713: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:56.713: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 10:41:58.713: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 27 10:41:58.720: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 27 10:42:02.807: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.37:8080/dial?request=hostname&protocol=udp&host=10.244.2.35&port=8081&tries=1'] Namespace:pod-network-test-901 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 10:42:02.807: INFO: >>> kubeConfig: /root/.kube/config I1027 10:42:02.841860 7 log.go:181] (0xc003720370) (0xc001f38820) Create stream I1027 10:42:02.841894 7 log.go:181] (0xc003720370) (0xc001f38820) Stream added, broadcasting: 1 I1027 10:42:02.843960 7 log.go:181] (0xc003720370) Reply frame received for 1 I1027 10:42:02.844094 7 log.go:181] (0xc003720370) (0xc00048de00) Create stream I1027 10:42:02.844104 7 log.go:181] (0xc003720370) (0xc00048de00) Stream added, broadcasting: 3 I1027 10:42:02.845107 7 log.go:181] (0xc003720370) Reply frame received for 3 I1027 10:42:02.845132 7 log.go:181] (0xc003720370) (0xc001f388c0) Create stream I1027 10:42:02.845146 7 log.go:181] (0xc003720370) (0xc001f388c0) Stream added, broadcasting: 5 I1027 10:42:02.846222 7 log.go:181] (0xc003720370) Reply frame received for 5 I1027 10:42:02.929971 7 log.go:181] (0xc003720370) Data frame received for 3 I1027 10:42:02.930023 7 log.go:181] (0xc00048de00) (3) Data frame handling I1027 10:42:02.930067 7 log.go:181] (0xc00048de00) (3) Data frame sent I1027 10:42:02.930368 7 log.go:181] (0xc003720370) Data frame received for 5 I1027 10:42:02.930385 7 log.go:181] (0xc001f388c0) (5) Data frame handling I1027 10:42:02.930409 7 log.go:181] (0xc003720370) Data frame received for 3 I1027 10:42:02.930445 7 log.go:181] (0xc00048de00) (3) Data frame handling I1027 10:42:02.931916 7 log.go:181] (0xc003720370) Data frame received for 1 I1027 10:42:02.931930 7 log.go:181] (0xc001f38820) (1) Data frame handling I1027 10:42:02.931947 7 log.go:181] (0xc001f38820) (1) Data frame sent I1027 10:42:02.931970 7 log.go:181] (0xc003720370) (0xc001f38820) Stream removed, broadcasting: 1 I1027 10:42:02.932016 7 log.go:181] (0xc003720370) Go away received I1027 10:42:02.932060 7 log.go:181] (0xc003720370) (0xc001f38820) Stream removed, broadcasting: 1 I1027 10:42:02.932073 7 log.go:181] (0xc003720370) (0xc00048de00) Stream removed, broadcasting: 3 I1027 10:42:02.932079 7 log.go:181] (0xc003720370) (0xc001f388c0) Stream removed, broadcasting: 5 Oct 27 10:42:02.932: INFO: Waiting for responses: map[] Oct 27 10:42:02.935: INFO: ExecWithOptions {Command:[/bin/sh -c curl 
-g -q -s 'http://10.244.2.37:8080/dial?request=hostname&protocol=udp&host=10.244.1.248&port=8081&tries=1'] Namespace:pod-network-test-901 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 10:42:02.935: INFO: >>> kubeConfig: /root/.kube/config I1027 10:42:02.970029 7 log.go:181] (0xc000028370) (0xc001db23c0) Create stream I1027 10:42:02.970058 7 log.go:181] (0xc000028370) (0xc001db23c0) Stream added, broadcasting: 1 I1027 10:42:02.973674 7 log.go:181] (0xc000028370) Reply frame received for 1 I1027 10:42:02.973750 7 log.go:181] (0xc000028370) (0xc001f38960) Create stream I1027 10:42:02.973791 7 log.go:181] (0xc000028370) (0xc001f38960) Stream added, broadcasting: 3 I1027 10:42:02.975419 7 log.go:181] (0xc000028370) Reply frame received for 3 I1027 10:42:02.975445 7 log.go:181] (0xc000028370) (0xc00034e000) Create stream I1027 10:42:02.975453 7 log.go:181] (0xc000028370) (0xc00034e000) Stream added, broadcasting: 5 I1027 10:42:02.976145 7 log.go:181] (0xc000028370) Reply frame received for 5 I1027 10:42:03.048096 7 log.go:181] (0xc000028370) Data frame received for 3 I1027 10:42:03.048126 7 log.go:181] (0xc001f38960) (3) Data frame handling I1027 10:42:03.048143 7 log.go:181] (0xc001f38960) (3) Data frame sent I1027 10:42:03.048382 7 log.go:181] (0xc000028370) Data frame received for 5 I1027 10:42:03.048406 7 log.go:181] (0xc00034e000) (5) Data frame handling I1027 10:42:03.048430 7 log.go:181] (0xc000028370) Data frame received for 3 I1027 10:42:03.048443 7 log.go:181] (0xc001f38960) (3) Data frame handling I1027 10:42:03.049982 7 log.go:181] (0xc000028370) Data frame received for 1 I1027 10:42:03.050005 7 log.go:181] (0xc001db23c0) (1) Data frame handling I1027 10:42:03.050015 7 log.go:181] (0xc001db23c0) (1) Data frame sent I1027 10:42:03.050026 7 log.go:181] (0xc000028370) (0xc001db23c0) Stream removed, broadcasting: 1 I1027 10:42:03.050048 7 log.go:181] (0xc000028370) Go away received I1027 10:42:03.050159 7 log.go:181] (0xc000028370) (0xc001db23c0) Stream removed, broadcasting: 1 I1027 10:42:03.050208 7 log.go:181] (0xc000028370) (0xc001f38960) Stream removed, broadcasting: 3 I1027 10:42:03.050229 7 log.go:181] (0xc000028370) (0xc00034e000) Stream removed, broadcasting: 5 Oct 27 10:42:03.050: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:03.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-901" for this suite. 
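The probes above are plain HTTP calls into the agnhost test pod, whose /dial endpoint relays a UDP request to the target pod and reports what it heard back. The same check can be run by hand; the namespace, pod name and IPs below are the ones from this run and will differ elsewhere:

kubectl exec -n pod-network-test-901 test-container-pod -- \
  curl -g -q -s 'http://10.244.2.37:8080/dial?request=hostname&protocol=udp&host=10.244.2.35&port=8081&tries=1'
# a successful probe returns a JSON body whose "responses" list contains the peer's hostname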
• [SLOW TEST:27.145 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":49,"skipped":804,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:03.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ab2f4f0e-1287-464e-9d94-016d2ee409c9 STEP: Creating a pod to test consume configMaps Oct 27 10:42:03.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df" in namespace "configmap-4227" to be "Succeeded or Failed" Oct 27 10:42:03.167: INFO: Pod "pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df": Phase="Pending", Reason="", readiness=false. Elapsed: 33.890642ms Oct 27 10:42:05.172: INFO: Pod "pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038995073s Oct 27 10:42:07.177: INFO: Pod "pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043656632s STEP: Saw pod success Oct 27 10:42:07.177: INFO: Pod "pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df" satisfied condition "Succeeded or Failed" Oct 27 10:42:07.179: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df container configmap-volume-test: STEP: delete the pod Oct 27 10:42:07.239: INFO: Waiting for pod pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df to disappear Oct 27 10:42:07.299: INFO: Pod pod-configmaps-9a484cf9-891b-4c1e-9d55-9b50fa66f7df no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:07.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4227" for this suite. 
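The property under test here is the defaultMode applied to files projected from a ConfigMap volume. A sketch with illustrative names and a 0400 mode (the conformance test chooses its own values):

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/cm && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-mode-demo
      defaultMode: 0400        # applied to every projected file unless overridden per item
EOF

kubectl logs cm-mode-demo      # the mode of data-1 should come out as 0400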
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":809,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:07.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 10:42:08.054: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 10:42:10.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392128, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392128, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392128, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392127, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 10:42:12.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392128, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392128, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392128, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739392127, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 10:42:15.156: INFO: 
Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:42:15.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9305-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:16.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4342" for this suite. STEP: Destroying namespace "webhook-4342-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.135 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":51,"skipped":809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:16.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-d04b5dc8-7f2a-4209-ba8a-59b40291657d STEP: Creating a pod to test consume configMaps Oct 27 10:42:16.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d" in namespace "configmap-4761" to be "Succeeded or Failed" Oct 27 10:42:16.536: INFO: Pod "pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.130535ms Oct 27 10:42:18.641: INFO: Pod "pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120936161s Oct 27 10:42:20.645: INFO: Pod "pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124805538s STEP: Saw pod success Oct 27 10:42:20.645: INFO: Pod "pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d" satisfied condition "Succeeded or Failed" Oct 27 10:42:20.647: INFO: Trying to get logs from node kali-worker pod pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d container configmap-volume-test: STEP: delete the pod Oct 27 10:42:20.714: INFO: Waiting for pod pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d to disappear Oct 27 10:42:20.720: INFO: Pod pod-configmaps-e77272a2-0ba1-4b1b-badc-eeb5d04f126d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:20.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4761" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:20.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 27 10:42:20.806: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7609 /api/v1/namespaces/watch-7609/configmaps/e2e-watch-test-label-changed 9d34aeb6-f9d7-49d7-b905-d7ee01a6b1ad 8960433 0 2020-10-27 10:42:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-27 10:42:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 10:42:20.806: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7609 /api/v1/namespaces/watch-7609/configmaps/e2e-watch-test-label-changed 9d34aeb6-f9d7-49d7-b905-d7ee01a6b1ad 8960434 0 2020-10-27 10:42:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-27 10:42:20 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 10:42:20.806: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7609 /api/v1/namespaces/watch-7609/configmaps/e2e-watch-test-label-changed 9d34aeb6-f9d7-49d7-b905-d7ee01a6b1ad 8960435 0 2020-10-27 10:42:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-27 10:42:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 27 10:42:30.942: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7609 /api/v1/namespaces/watch-7609/configmaps/e2e-watch-test-label-changed 9d34aeb6-f9d7-49d7-b905-d7ee01a6b1ad 8960516 0 2020-10-27 10:42:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-27 10:42:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 10:42:30.942: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7609 /api/v1/namespaces/watch-7609/configmaps/e2e-watch-test-label-changed 9d34aeb6-f9d7-49d7-b905-d7ee01a6b1ad 8960518 0 2020-10-27 10:42:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-27 10:42:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 10:42:30.942: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7609 /api/v1/namespaces/watch-7609/configmaps/e2e-watch-test-label-changed 9d34aeb6-f9d7-49d7-b905-d7ee01a6b1ad 8960519 0 2020-10-27 10:42:20 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-27 10:42:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:30.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7609" for this suite. 
• [SLOW TEST:10.264 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":53,"skipped":868,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:30.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 10:42:31.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1" in namespace "downward-api-5996" to be "Succeeded or Failed" Oct 27 10:42:31.352: INFO: Pod "downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1": Phase="Pending", Reason="", readiness=false. Elapsed: 90.294984ms Oct 27 10:42:33.419: INFO: Pod "downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158140187s Oct 27 10:42:35.423: INFO: Pod "downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161499746s STEP: Saw pod success Oct 27 10:42:35.423: INFO: Pod "downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1" satisfied condition "Succeeded or Failed" Oct 27 10:42:35.425: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1 container client-container: STEP: delete the pod Oct 27 10:42:35.471: INFO: Waiting for pod downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1 to disappear Oct 27 10:42:35.481: INFO: Pod downwardapi-volume-e966d9f4-72f5-4dc3-bc75-2eea720207a1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:35.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5996" for this suite. 
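Note: the downward API volume test above asserts on an explicit per-item file mode. A minimal manifest that exercises the same field (pod name, image and mount path are illustrative, not the ones used by the suite):

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
          mode: 0400        # the per-item mode the test checks
  EOF
  $ kubectl logs downwardapi-mode-demo    # once the pod reaches Succeeded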
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:35.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3714 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3714 STEP: creating replication controller externalsvc in namespace services-3714 I1027 10:42:35.776583 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3714, replica count: 2 I1027 10:42:38.827021 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 10:42:41.827259 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 27 10:42:41.935: INFO: Creating new exec pod Oct 27 10:42:45.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3714 execpodgmxqm -- /bin/sh -x -c nslookup clusterip-service.services-3714.svc.cluster.local' Oct 27 10:42:49.823: INFO: stderr: "I1027 10:42:49.704810 553 log.go:181] (0xc0006f2bb0) (0xc000bd4140) Create stream\nI1027 10:42:49.704968 553 log.go:181] (0xc0006f2bb0) (0xc000bd4140) Stream added, broadcasting: 1\nI1027 10:42:49.707097 553 log.go:181] (0xc0006f2bb0) Reply frame received for 1\nI1027 10:42:49.707129 553 log.go:181] (0xc0006f2bb0) (0xc000bd41e0) Create stream\nI1027 10:42:49.707138 553 log.go:181] (0xc0006f2bb0) (0xc000bd41e0) Stream added, broadcasting: 3\nI1027 10:42:49.708087 553 log.go:181] (0xc0006f2bb0) Reply frame received for 3\nI1027 10:42:49.708114 553 log.go:181] (0xc0006f2bb0) (0xc000bd4280) Create stream\nI1027 10:42:49.708122 553 log.go:181] (0xc0006f2bb0) (0xc000bd4280) Stream added, broadcasting: 5\nI1027 10:42:49.709073 553 log.go:181] (0xc0006f2bb0) Reply frame received for 5\nI1027 10:42:49.805301 553 log.go:181] (0xc0006f2bb0) Data frame received for 5\nI1027 10:42:49.805323 553 log.go:181] (0xc000bd4280) (5) Data frame handling\nI1027 10:42:49.805336 553 log.go:181] (0xc000bd4280) (5) Data frame 
sent\n+ nslookup clusterip-service.services-3714.svc.cluster.local\nI1027 10:42:49.814323 553 log.go:181] (0xc0006f2bb0) Data frame received for 3\nI1027 10:42:49.814340 553 log.go:181] (0xc000bd41e0) (3) Data frame handling\nI1027 10:42:49.814347 553 log.go:181] (0xc000bd41e0) (3) Data frame sent\nI1027 10:42:49.815135 553 log.go:181] (0xc0006f2bb0) Data frame received for 3\nI1027 10:42:49.815149 553 log.go:181] (0xc000bd41e0) (3) Data frame handling\nI1027 10:42:49.815159 553 log.go:181] (0xc000bd41e0) (3) Data frame sent\nI1027 10:42:49.815527 553 log.go:181] (0xc0006f2bb0) Data frame received for 3\nI1027 10:42:49.815546 553 log.go:181] (0xc000bd41e0) (3) Data frame handling\nI1027 10:42:49.815980 553 log.go:181] (0xc0006f2bb0) Data frame received for 5\nI1027 10:42:49.815999 553 log.go:181] (0xc000bd4280) (5) Data frame handling\nI1027 10:42:49.817530 553 log.go:181] (0xc0006f2bb0) Data frame received for 1\nI1027 10:42:49.817551 553 log.go:181] (0xc000bd4140) (1) Data frame handling\nI1027 10:42:49.817564 553 log.go:181] (0xc000bd4140) (1) Data frame sent\nI1027 10:42:49.817582 553 log.go:181] (0xc0006f2bb0) (0xc000bd4140) Stream removed, broadcasting: 1\nI1027 10:42:49.817600 553 log.go:181] (0xc0006f2bb0) Go away received\nI1027 10:42:49.817955 553 log.go:181] (0xc0006f2bb0) (0xc000bd4140) Stream removed, broadcasting: 1\nI1027 10:42:49.817976 553 log.go:181] (0xc0006f2bb0) (0xc000bd41e0) Stream removed, broadcasting: 3\nI1027 10:42:49.817987 553 log.go:181] (0xc0006f2bb0) (0xc000bd4280) Stream removed, broadcasting: 5\n" Oct 27 10:42:49.823: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3714.svc.cluster.local\tcanonical name = externalsvc.services-3714.svc.cluster.local.\nName:\texternalsvc.services-3714.svc.cluster.local\nAddress: 10.101.188.214\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3714, will wait for the garbage collector to delete the pods Oct 27 10:42:49.917: INFO: Deleting ReplicationController externalsvc took: 6.250708ms Oct 27 10:42:50.317: INFO: Terminating ReplicationController externalsvc pods took: 400.130768ms Oct 27 10:42:58.742: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:42:58.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3714" for this suite. 
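Note: the type change driven above can be approximated with kubectl. A rough sketch, assuming a throwaway namespace demo-ns and hypothetical service/pod names; the JSON merge patch clears spec.clusterIP (null deletes the key under merge-patch semantics), which may be required when switching an existing Service to ExternalName:

  $ kubectl -n demo-ns create service clusterip externalsvc --tcp=80:80
  $ kubectl -n demo-ns create service clusterip clusterip-service --tcp=80:80
  $ kubectl -n demo-ns patch service clusterip-service --type=merge \
      -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.demo-ns.svc.cluster.local","clusterIP":null}}'
  # verification mirrors the nslookup in the log: the old name now resolves as a CNAME to externalsvc
  $ kubectl -n demo-ns run execpod --image=busybox:1.29 --restart=Never --command -- sleep 3600
  $ kubectl -n demo-ns exec execpod -- nslookup clusterip-service.demo-ns.svc.cluster.local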
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.249 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":55,"skipped":898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:42:58.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2d64f303-fcae-47d4-b22b-694ce41f57ef STEP: Creating a pod to test consume secrets Oct 27 10:42:58.965: INFO: Waiting up to 5m0s for pod "pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e" in namespace "secrets-7813" to be "Succeeded or Failed" Oct 27 10:42:58.986: INFO: Pod "pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.518452ms Oct 27 10:43:00.990: INFO: Pod "pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024660257s Oct 27 10:43:02.994: INFO: Pod "pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02883967s STEP: Saw pod success Oct 27 10:43:02.994: INFO: Pod "pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e" satisfied condition "Succeeded or Failed" Oct 27 10:43:02.997: INFO: Trying to get logs from node kali-worker pod pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e container secret-volume-test: STEP: delete the pod Oct 27 10:43:03.019: INFO: Waiting for pod pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e to disappear Oct 27 10:43:03.034: INFO: Pod pod-secrets-ea13b835-1090-4070-a18d-a304f7d5225e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:43:03.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7813" for this suite. 
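Note: the secret-volume consumption above boils down to mounting a Secret into a pod and reading the projected file. A minimal equivalent, with hypothetical names and image:

  $ kubectl create secret generic secret-demo --from-literal=data-1=value-1
  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-demo
  EOF
  $ kubectl logs pod-secrets-demo    # prints value-1 once the pod reaches Succeeded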
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":939,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:43:03.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5885.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5885.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5885.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5885.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 26.146.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.146.26_udp@PTR;check="$$(dig +tcp +noall +answer +search 26.146.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.146.26_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5885.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5885.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5885.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5885.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5885.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 26.146.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.146.26_udp@PTR;check="$$(dig +tcp +noall +answer +search 26.146.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.146.26_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 10:43:11.312: INFO: Unable to read wheezy_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.316: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.318: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.321: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.345: INFO: Unable to read jessie_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.348: INFO: Unable to read jessie_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.351: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.354: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:11.371: INFO: Lookups using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 failed for: [wheezy_udp@dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_udp@dns-test-service.dns-5885.svc.cluster.local jessie_tcp@dns-test-service.dns-5885.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local] Oct 27 10:43:16.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods 
dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.389: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.412: INFO: Unable to read jessie_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.418: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:16.439: INFO: Lookups using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 failed for: [wheezy_udp@dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_udp@dns-test-service.dns-5885.svc.cluster.local jessie_tcp@dns-test-service.dns-5885.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local] Oct 27 10:43:21.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.384: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.388: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.416: INFO: Unable to read jessie_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the 
server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.419: INFO: Unable to read jessie_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.422: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.424: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:21.443: INFO: Lookups using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 failed for: [wheezy_udp@dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_udp@dns-test-service.dns-5885.svc.cluster.local jessie_tcp@dns-test-service.dns-5885.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local] Oct 27 10:43:26.376: INFO: Unable to read wheezy_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.386: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.404: INFO: Unable to read jessie_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.406: INFO: Unable to read jessie_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.408: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.410: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod 
dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:26.423: INFO: Lookups using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 failed for: [wheezy_udp@dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_udp@dns-test-service.dns-5885.svc.cluster.local jessie_tcp@dns-test-service.dns-5885.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local] Oct 27 10:43:31.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.384: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.387: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.406: INFO: Unable to read jessie_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.409: INFO: Unable to read jessie_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.412: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.415: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:31.433: INFO: Lookups using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 failed for: [wheezy_udp@dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_udp@dns-test-service.dns-5885.svc.cluster.local jessie_tcp@dns-test-service.dns-5885.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local] Oct 27 
10:43:36.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.395: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.412: INFO: Unable to read jessie_udp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.414: INFO: Unable to read jessie_tcp@dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.417: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.420: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local from pod dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61: the server could not find the requested resource (get pods dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61) Oct 27 10:43:36.435: INFO: Lookups using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 failed for: [wheezy_udp@dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@dns-test-service.dns-5885.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_udp@dns-test-service.dns-5885.svc.cluster.local jessie_tcp@dns-test-service.dns-5885.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5885.svc.cluster.local] Oct 27 10:43:41.565: INFO: DNS probes using dns-5885/dns-test-8f45f160-bfe5-4ecb-a4c1-fb35f4afaa61 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:43:42.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5885" for this suite. 
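Note: stripped of the result-file bookkeeping, the dig loops above check A, SRV and PTR records for the regular and headless services over both UDP and TCP. The individual queries look like this when run from any pod that has dig installed (service name, namespace and ClusterIP are the ones from this run):

  $ dig +short dns-test-service.dns-5885.svc.cluster.local A
  $ dig +short _http._tcp.dns-test-service.dns-5885.svc.cluster.local SRV
  $ dig +short _http._tcp.test-service-2.dns-5885.svc.cluster.local SRV
  $ dig +short -x 10.102.146.26        # reverse (PTR) lookup for the service ClusterIP
  # the same queries are repeated with +tcp, which is what the wheezy_tcp/jessie_tcp probes cover
  $ dig +tcp +short dns-test-service.dns-5885.svc.cluster.local A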
• [SLOW TEST:39.500 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":57,"skipped":942,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:43:42.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 27 10:43:47.197: INFO: Successfully updated pod "annotationupdate76c97926-0b73-408f-9361-aa531eafa84b" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:43:51.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1068" for this suite. 
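Note: the annotation-update test relies on the kubelet refreshing downward API volume contents after the pod's metadata changes. A hand-run sketch with hypothetical names (the test's own pod is created by the framework, not like this):

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo
    annotations:
      builder: original-value
  spec:
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: annotations
          fieldRef:
            fieldPath: metadata.annotations
  EOF
  $ kubectl annotate pod annotationupdate-demo builder=updated-value --overwrite
  # the projected file is refreshed asynchronously, so the loop's output switches
  # to the new value after a short delay
  $ kubectl logs -f annotationupdate-demo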
• [SLOW TEST:8.717 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":956,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:43:51.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc in namespace container-probe-4691 Oct 27 10:43:55.371: INFO: Started pod liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc in namespace container-probe-4691 STEP: checking the pod's current state and verifying that restartCount is present Oct 27 10:43:55.374: INFO: Initial restart count of pod liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc is 0 Oct 27 10:44:09.423: INFO: Restart count of pod container-probe-4691/liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc is now 1 (14.049178016s elapsed) Oct 27 10:44:29.483: INFO: Restart count of pod container-probe-4691/liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc is now 2 (34.109268397s elapsed) Oct 27 10:44:49.532: INFO: Restart count of pod container-probe-4691/liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc is now 3 (54.157893067s elapsed) Oct 27 10:45:09.579: INFO: Restart count of pod container-probe-4691/liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc is now 4 (1m14.204479042s elapsed) Oct 27 10:46:20.459: INFO: Restart count of pod container-probe-4691/liveness-fbf52e21-8892-47c6-96a2-aa427d90a1dc is now 5 (2m25.08449466s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:46:20.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4691" for this suite. 
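Note: the monotonically increasing restart count comes from a liveness probe that keeps failing after an initial healthy window, so the kubelet restarts the container each time the failure threshold is hit. A minimal pod that reproduces the pattern (image and timings are illustrative):

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: liveness
      image: busybox:1.29
      args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # watch the RESTARTS column climb, mirroring the "Restart count ... is now N" lines above
  $ kubectl get pod liveness-demo -w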
• [SLOW TEST:149.216 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":969,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:46:20.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-22801d45-3a50-4365-9544-3c7cd0309861 STEP: Creating a pod to test consume configMaps Oct 27 10:46:20.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f" in namespace "projected-8224" to be "Succeeded or Failed" Oct 27 10:46:20.725: INFO: Pod "pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f": Phase="Pending", Reason="", readiness=false. Elapsed: 53.407921ms Oct 27 10:46:22.729: INFO: Pod "pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05706955s Oct 27 10:46:24.968: INFO: Pod "pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295992295s Oct 27 10:46:27.527: INFO: Pod "pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.8553454s STEP: Saw pod success Oct 27 10:46:27.527: INFO: Pod "pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f" satisfied condition "Succeeded or Failed" Oct 27 10:46:27.530: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f container projected-configmap-volume-test: STEP: delete the pod Oct 27 10:46:28.030: INFO: Waiting for pod pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f to disappear Oct 27 10:46:28.215: INFO: Pod pod-projected-configmaps-29bb645b-d35c-4d58-bbc2-263aad73298f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:46:28.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8224" for this suite. • [SLOW TEST:7.746 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":972,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:46:28.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:46:28.713: INFO: Create a RollingUpdate DaemonSet Oct 27 10:46:28.730: INFO: Check that daemon pods launch on every node of the cluster Oct 27 10:46:28.737: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:28.767: INFO: Number of nodes with available pods: 0 Oct 27 10:46:28.767: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:46:29.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:29.776: INFO: Number of nodes with available pods: 0 Oct 27 
10:46:29.776: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:46:30.971: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:30.975: INFO: Number of nodes with available pods: 0 Oct 27 10:46:30.975: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:46:31.998: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:32.011: INFO: Number of nodes with available pods: 0 Oct 27 10:46:32.011: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:46:32.777: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:32.779: INFO: Number of nodes with available pods: 0 Oct 27 10:46:32.779: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:46:33.773: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:33.776: INFO: Number of nodes with available pods: 2 Oct 27 10:46:33.776: INFO: Number of running nodes: 2, number of available pods: 2 Oct 27 10:46:33.776: INFO: Update the DaemonSet to trigger a rollout Oct 27 10:46:33.780: INFO: Updating DaemonSet daemon-set Oct 27 10:46:48.795: INFO: Roll back the DaemonSet before rollout is complete Oct 27 10:46:48.800: INFO: Updating DaemonSet daemon-set Oct 27 10:46:48.801: INFO: Make sure DaemonSet rollback is complete Oct 27 10:46:48.817: INFO: Wrong image for pod: daemon-set-96vh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Oct 27 10:46:48.817: INFO: Pod daemon-set-96vh8 is not available Oct 27 10:46:48.827: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:49.831: INFO: Wrong image for pod: daemon-set-96vh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Oct 27 10:46:49.831: INFO: Pod daemon-set-96vh8 is not available Oct 27 10:46:49.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:50.832: INFO: Wrong image for pod: daemon-set-96vh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Oct 27 10:46:50.832: INFO: Pod daemon-set-96vh8 is not available Oct 27 10:46:50.837: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:51.832: INFO: Wrong image for pod: daemon-set-96vh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Oct 27 10:46:51.832: INFO: Pod daemon-set-96vh8 is not available Oct 27 10:46:51.835: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:52.831: INFO: Wrong image for pod: daemon-set-96vh8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Oct 27 10:46:52.831: INFO: Pod daemon-set-96vh8 is not available Oct 27 10:46:52.834: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:46:53.830: INFO: Pod daemon-set-x9dbj is not available Oct 27 10:46:53.832: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4649, will wait for the garbage collector to delete the pods Oct 27 10:46:53.894: INFO: Deleting DaemonSet.extensions daemon-set took: 8.141017ms Oct 27 10:46:54.394: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.239913ms Oct 27 10:46:57.498: INFO: Number of nodes with available pods: 0 Oct 27 10:46:57.498: INFO: Number of running nodes: 0, number of available pods: 0 Oct 27 10:46:57.504: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4649/daemonsets","resourceVersion":"8961884"},"items":null} Oct 27 10:46:57.507: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4649/pods","resourceVersion":"8961884"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:46:57.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4649" for this suite. 
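Note: the rollout-and-rollback sequence above maps onto the standard DaemonSet rollout commands. A rough equivalent against the namespace and DaemonSet from this run (the container name "app" is an assumption; substitute the actual container name):

  $ kubectl -n daemonsets-4649 set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout with a bad image
  $ kubectl -n daemonsets-4649 rollout status daemonset/daemon-set --timeout=30s     # stalls: the new pod never becomes available
  $ kubectl -n daemonsets-4649 rollout undo daemonset/daemon-set                     # roll back before the rollout completes
  $ kubectl -n daemonsets-4649 rollout status daemonset/daemon-set                   # finishes without restarting the healthy pods
  $ kubectl -n daemonsets-4649 rollout history daemonset/daemon-set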
• [SLOW TEST:29.301 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":61,"skipped":1019,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:46:57.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 27 10:46:57.606: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 27 10:46:57.613: INFO: Waiting for terminating namespaces to be deleted... Oct 27 10:46:57.616: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 27 10:46:57.621: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:46:57.621: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 10:46:57.621: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:46:57.621: INFO: Container kube-proxy ready: true, restart count 0 Oct 27 10:46:57.621: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 27 10:46:57.625: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:46:57.625: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 10:46:57.625: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:46:57.625: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-4ca0d18b-a403-4ef6-8c22-24b1a5c3485e 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-4ca0d18b-a403-4ef6-8c22-24b1a5c3485e off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4ca0d18b-a403-4ef6-8c22-24b1a5c3485e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:13.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3141" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.382 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":62,"skipped":1040,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:13.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-cf22ae3d-2e00-4c3b-8fac-1092223d229b STEP: Creating a pod to test consume secrets Oct 27 10:47:14.010: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9" in namespace "projected-8036" to be "Succeeded or Failed" Oct 27 10:47:14.031: INFO: Pod "pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.496462ms Oct 27 10:47:16.036: INFO: Pod "pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025862543s Oct 27 10:47:18.041: INFO: Pod "pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030631966s STEP: Saw pod success Oct 27 10:47:18.041: INFO: Pod "pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9" satisfied condition "Succeeded or Failed" Oct 27 10:47:18.044: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9 container projected-secret-volume-test: STEP: delete the pod Oct 27 10:47:18.083: INFO: Waiting for pod pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9 to disappear Oct 27 10:47:18.090: INFO: Pod pod-projected-secrets-44dd6d52-c912-4e1f-9f9f-14ef2106cca9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:18.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8036" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":1056,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:18.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-7c03efaa-f519-4e8f-9aec-0a1162168cd3 STEP: Creating a pod to test consume secrets Oct 27 10:47:18.226: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29" in namespace "projected-8277" to be "Succeeded or Failed" Oct 27 10:47:18.237: INFO: Pod "pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29": Phase="Pending", Reason="", readiness=false. Elapsed: 11.172264ms Oct 27 10:47:20.240: INFO: Pod "pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014050016s Oct 27 10:47:22.247: INFO: Pod "pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02103396s Oct 27 10:47:24.251: INFO: Pod "pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025010195s STEP: Saw pod success Oct 27 10:47:24.251: INFO: Pod "pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29" satisfied condition "Succeeded or Failed" Oct 27 10:47:24.268: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29 container secret-volume-test: STEP: delete the pod Oct 27 10:47:24.291: INFO: Waiting for pod pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29 to disappear Oct 27 10:47:24.310: INFO: Pod pod-projected-secrets-5694db13-0bd8-4776-ba30-d196bf568a29 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8277" for this suite. • [SLOW TEST:6.220 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1060,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:24.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:47:24.451: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 27 10:47:29.472: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 27 10:47:29.472: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 27 10:47:29.571: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8094 /apis/apps/v1/namespaces/deployment-8094/deployments/test-cleanup-deployment d8528a1a-f3d1-461d-9a1b-d6c13ab100b6 8962151 1 2020-10-27 10:47:29 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-10-27 10:47:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0033d1878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Oct 27 10:47:29.642: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-8094 /apis/apps/v1/namespaces/deployment-8094/replicasets/test-cleanup-deployment-5d446bdd47 2495cbc2-0d34-441e-bf64-d5679445844b 8962160 1 2020-10-27 10:47:29 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment d8528a1a-f3d1-461d-9a1b-d6c13ab100b6 0xc0040dc547 0xc0040dc548}] [] [{kube-controller-manager Update apps/v1 2020-10-27 10:47:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8528a1a-f3d1-461d-9a1b-d6c13ab100b6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0040dc5d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 10:47:29.642: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Oct 27 10:47:29.642: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8094 /apis/apps/v1/namespaces/deployment-8094/replicasets/test-cleanup-controller 1e0b9ffc-4491-4dd6-9d2c-7c06a56a045c 8962153 1 2020-10-27 10:47:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment d8528a1a-f3d1-461d-9a1b-d6c13ab100b6 0xc0040dc437 0xc0040dc438}] [] [{e2e.test Update apps/v1 2020-10-27 10:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-27 10:47:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"d8528a1a-f3d1-461d-9a1b-d6c13ab100b6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0040dc4d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 27 10:47:29.679: INFO: Pod "test-cleanup-controller-th68r" is available: &Pod{ObjectMeta:{test-cleanup-controller-th68r test-cleanup-controller- deployment-8094 /api/v1/namespaces/deployment-8094/pods/test-cleanup-controller-th68r 3bd9a37b-fbb3-4a7c-a645-0d80e0b976a0 8962129 0 2020-10-27 10:47:24 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 1e0b9ffc-4491-4dd6-9d2c-7c06a56a045c 0xc0040dca77 0xc0040dca78}] [] [{kube-controller-manager Update v1 2020-10-27 10:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e0b9ffc-4491-4dd6-9d2c-7c06a56a045c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 10:47:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mjxg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mjxg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mjxg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:47:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-27 10:47:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:47:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.11,StartTime:2020-10-27 10:47:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 10:47:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ab62809614714e2959cb8476b9583d67f1c8032b5ee0f0fe9bd7fe96dab58739,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 27 10:47:29.679: INFO: Pod "test-cleanup-deployment-5d446bdd47-27dps" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-27dps test-cleanup-deployment-5d446bdd47- deployment-8094 /api/v1/namespaces/deployment-8094/pods/test-cleanup-deployment-5d446bdd47-27dps b4e26abc-062f-42fc-82ff-b6aa940967a8 8962158 0 2020-10-27 10:47:29 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 2495cbc2-0d34-441e-bf64-d5679445844b 0xc0040dcc47 0xc0040dcc48}] [] [{kube-controller-manager Update v1 2020-10-27 10:47:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2495cbc2-0d34-441e-bf64-d5679445844b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mjxg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mjxg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mjxg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunA
sNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 10:47:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:29.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8094" for this suite. 
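The cleanup the framework waits for above is driven by the Deployment's spec.revisionHistoryLimit, shown as 0 in the dump, so superseded ReplicaSets are garbage-collected as soon as the new one takes over. A minimal sketch of checking the same thing by hand, reusing the names from this run:

# revisionHistoryLimit of 0 keeps no old ReplicaSets around
kubectl -n deployment-8094 get deployment test-cleanup-deployment \
  -o jsonpath='{.spec.revisionHistoryLimit}'

# after the rollout only the current ReplicaSet (test-cleanup-deployment-5d446bdd47)
# should remain; the adopted test-cleanup-controller is deleted
kubectl -n deployment-8094 get replicasets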
• [SLOW TEST:5.438 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":65,"skipped":1062,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:29.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 27 10:47:29.852: INFO: Waiting up to 5m0s for pod "pod-d8af097d-5496-4def-89be-b1653a1042b4" in namespace "emptydir-349" to be "Succeeded or Failed" Oct 27 10:47:29.912: INFO: Pod "pod-d8af097d-5496-4def-89be-b1653a1042b4": Phase="Pending", Reason="", readiness=false. Elapsed: 59.74407ms Oct 27 10:47:31.916: INFO: Pod "pod-d8af097d-5496-4def-89be-b1653a1042b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063842921s Oct 27 10:47:34.036: INFO: Pod "pod-d8af097d-5496-4def-89be-b1653a1042b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.183499396s Oct 27 10:47:36.040: INFO: Pod "pod-d8af097d-5496-4def-89be-b1653a1042b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187880325s STEP: Saw pod success Oct 27 10:47:36.040: INFO: Pod "pod-d8af097d-5496-4def-89be-b1653a1042b4" satisfied condition "Succeeded or Failed" Oct 27 10:47:36.043: INFO: Trying to get logs from node kali-worker2 pod pod-d8af097d-5496-4def-89be-b1653a1042b4 container test-container: STEP: delete the pod Oct 27 10:47:36.105: INFO: Waiting for pod pod-d8af097d-5496-4def-89be-b1653a1042b4 to disappear Oct 27 10:47:36.214: INFO: Pod pod-d8af097d-5496-4def-89be-b1653a1042b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:36.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-349" for this suite. 
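The emptyDir check above writes a 0644 file into a default-medium (node-disk-backed) emptyDir and verifies its mode and content. A minimal illustrative pod that does the same outside the framework (the pod name emptydir-demo is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # write a file with mode 0644 into the emptyDir, then show its mode and content
    command: ["/bin/sh", "-c", "echo hello > /mnt/test/data && chmod 0644 /mnt/test/data && ls -l /mnt/test/data && cat /mnt/test/data"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}   # no medium set, so it is backed by node storage
EOF

kubectl logs emptydir-demo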
• [SLOW TEST:6.466 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":1078,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:36.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Oct 27 10:47:36.353: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:36.357: INFO: Number of nodes with available pods: 0 Oct 27 10:47:36.357: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:37.431: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:37.435: INFO: Number of nodes with available pods: 0 Oct 27 10:47:37.435: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:38.363: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:38.366: INFO: Number of nodes with available pods: 0 Oct 27 10:47:38.366: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:39.402: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:39.405: INFO: Number of nodes with available pods: 0 Oct 27 10:47:39.405: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:40.363: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:40.367: INFO: Number of nodes with available pods: 1 Oct 27 10:47:40.367: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:41.368: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:41.372: INFO: Number of nodes with available pods: 2 Oct 27 10:47:41.372: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Oct 27 10:47:41.497: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:41.506: INFO: Number of nodes with available pods: 1 Oct 27 10:47:41.506: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:42.510: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:42.514: INFO: Number of nodes with available pods: 1 Oct 27 10:47:42.514: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:43.629: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:43.633: INFO: Number of nodes with available pods: 1 Oct 27 10:47:43.633: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:44.511: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:44.515: INFO: Number of nodes with available pods: 1 Oct 27 10:47:44.515: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:45.512: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:45.516: INFO: Number of nodes with available pods: 1 Oct 27 10:47:45.516: INFO: Node kali-worker is running more than one daemon pod Oct 27 10:47:46.513: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 10:47:46.517: INFO: Number of nodes with available pods: 2 Oct 27 10:47:46.517: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-512, will wait for the garbage collector to delete the pods Oct 27 10:47:46.581: INFO: Deleting DaemonSet.extensions daemon-set took: 6.434095ms Oct 27 10:47:47.082: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.329757ms Oct 27 10:47:58.699: INFO: Number of nodes with available pods: 0 Oct 27 10:47:58.699: INFO: Number of running nodes: 0, number of available pods: 0 Oct 27 10:47:58.702: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-512/daemonsets","resourceVersion":"8962360"},"items":null} Oct 27 10:47:58.704: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-512/pods","resourceVersion":"8962360"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:58.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-512" for this suite. 
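The polling above maps directly onto the DaemonSet's status; a minimal sketch of watching the same recovery by hand, reusing the names from this run:

# DESIRED/CURRENT/READY/AVAILABLE mirror the "Number of nodes with available pods" lines
kubectl -n daemonsets-512 get daemonset daemon-set

# the control-plane node is skipped because the pods do not tolerate its
# node-role.kubernetes.io/master:NoSchedule taint
kubectl -n daemonsets-512 get pods -o wide

# after a pod is forced to Failed, the controller deletes it and creates a replacement
kubectl -n daemonsets-512 rollout status daemonset/daemon-set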
• [SLOW TEST:22.496 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":67,"skipped":1090,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:58.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:47:58.767: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:47:59.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6713" for this suite. 
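Defaulting for custom resources, both on requests and when objects are read back from storage, comes from default values declared in the CRD's structural schema. A minimal sketch with a hypothetical widgets.example.com CRD (none of these names come from the run above):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # filled in on create/update and when reading from storage
EOF

kubectl wait --for=condition=Established crd/widgets.example.com

# create an object that leaves spec.replicas unset and read the default back
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: demo
spec: {}
EOF
kubectl get widget demo -o jsonpath='{.spec.replicas}'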
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":68,"skipped":1090,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:47:59.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-e77d88e7-e0d1-404c-b3b7-ff5f3bb44c83 STEP: Creating a pod to test consume configMaps Oct 27 10:48:00.022: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f" in namespace "projected-2565" to be "Succeeded or Failed" Oct 27 10:48:00.026: INFO: Pod "pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.896415ms Oct 27 10:48:02.030: INFO: Pod "pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007842693s Oct 27 10:48:04.045: INFO: Pod "pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022981183s STEP: Saw pod success Oct 27 10:48:04.045: INFO: Pod "pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f" satisfied condition "Succeeded or Failed" Oct 27 10:48:04.057: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f container projected-configmap-volume-test: STEP: delete the pod Oct 27 10:48:04.158: INFO: Waiting for pod pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f to disappear Oct 27 10:48:04.164: INFO: Pod pod-projected-configmaps-6a9a2ed0-5eb9-4c64-adea-55898977ab4f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:48:04.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2565" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":69,"skipped":1090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:48:04.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6738.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6738.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6738.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6738.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6738.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6738.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 10:48:10.536: INFO: DNS probes using dns-6738/dns-test-9c6eb4a8-2f14-49de-b52a-a2e284f04ed5 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:48:11.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6738" for this suite. 
• [SLOW TEST:7.315 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":70,"skipped":1119,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:48:11.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Oct 27 10:48:11.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-7953 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 27 10:48:11.845: INFO: stderr: "" Oct 27 10:48:11.845: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Oct 27 10:48:11.845: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 27 10:48:11.845: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7953" to be "running and ready, or succeeded" Oct 27 10:48:11.872: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 26.769432ms Oct 27 10:48:13.964: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119049239s Oct 27 10:48:15.968: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.123278075s Oct 27 10:48:15.968: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 27 10:48:15.968: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Oct 27 10:48:15.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7953' Oct 27 10:48:16.086: INFO: stderr: "" Oct 27 10:48:16.086: INFO: stdout: "I1027 10:48:15.198016 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/lz2 342\nI1027 10:48:15.398227 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/swh7 579\nI1027 10:48:15.598204 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/wc2 323\nI1027 10:48:15.798249 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/xqgh 553\nI1027 10:48:15.998099 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/2blt 230\n" STEP: limiting log lines Oct 27 10:48:16.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7953 --tail=1' Oct 27 10:48:16.190: INFO: stderr: "" Oct 27 10:48:16.190: INFO: stdout: "I1027 10:48:15.998099 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/2blt 230\n" Oct 27 10:48:16.190: INFO: got output "I1027 10:48:15.998099 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/2blt 230\n" STEP: limiting log bytes Oct 27 10:48:16.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7953 --limit-bytes=1' Oct 27 10:48:16.299: INFO: stderr: "" Oct 27 10:48:16.299: INFO: stdout: "I" Oct 27 10:48:16.299: INFO: got output "I" STEP: exposing timestamps Oct 27 10:48:16.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7953 --tail=1 --timestamps' Oct 27 10:48:16.409: INFO: stderr: "" Oct 27 10:48:16.409: INFO: stdout: "2020-10-27T10:48:16.398333711Z I1027 10:48:16.398181 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/558c 378\n" Oct 27 10:48:16.409: INFO: got output "2020-10-27T10:48:16.398333711Z I1027 10:48:16.398181 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/558c 378\n" STEP: restricting to a time range Oct 27 10:48:18.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7953 --since=1s' Oct 27 10:48:19.028: INFO: stderr: "" Oct 27 10:48:19.028: INFO: stdout: "I1027 10:48:18.198170 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/8f7v 417\nI1027 10:48:18.398216 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/6zwf 273\nI1027 10:48:18.598191 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/hbbv 319\nI1027 10:48:18.798179 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/jbl 468\nI1027 10:48:18.998159 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/lksk 421\n" Oct 27 10:48:19.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7953 --since=24h' Oct 27 10:48:19.150: INFO: stderr: "" Oct 27 10:48:19.150: INFO: stdout: "I1027 10:48:15.198016 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/lz2 342\nI1027 10:48:15.398227 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/swh7 579\nI1027 10:48:15.598204 1 logs_generator.go:76] 
2 PUT /api/v1/namespaces/kube-system/pods/wc2 323\nI1027 10:48:15.798249 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/xqgh 553\nI1027 10:48:15.998099 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/2blt 230\nI1027 10:48:16.198159 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/8pnb 461\nI1027 10:48:16.398181 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/558c 378\nI1027 10:48:16.598149 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/jkpd 321\nI1027 10:48:16.798123 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/ldbx 200\nI1027 10:48:16.998228 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/kx8 287\nI1027 10:48:17.198205 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/qv2 538\nI1027 10:48:17.398242 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/lz8 307\nI1027 10:48:17.598192 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/nd6 535\nI1027 10:48:17.798166 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/k5p 590\nI1027 10:48:17.998226 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/5vgm 234\nI1027 10:48:18.198170 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/8f7v 417\nI1027 10:48:18.398216 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/6zwf 273\nI1027 10:48:18.598191 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/hbbv 319\nI1027 10:48:18.798179 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/jbl 468\nI1027 10:48:18.998159 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/lksk 421\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Oct 27 10:48:19.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7953' Oct 27 10:48:28.124: INFO: stderr: "" Oct 27 10:48:28.124: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:48:28.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7953" for this suite. 
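The filtering options exercised above are ordinary kubectl logs flags and can be replayed against any running pod; a minimal sketch using the pod and namespace from this run:

NS=kubectl-7953
POD=logs-generator

kubectl -n $NS logs $POD                         # full log
kubectl -n $NS logs $POD --tail=1                # last line only
kubectl -n $NS logs $POD --limit-bytes=1         # first byte only
kubectl -n $NS logs $POD --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
kubectl -n $NS logs $POD --since=1s              # only lines from the last second
kubectl -n $NS logs $POD --since=24h             # only lines from the last 24 hours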
• [SLOW TEST:16.626 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":71,"skipped":1132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:48:28.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-84f62286-b528-46f1-b779-6bc941295a1f in namespace container-probe-9922 Oct 27 10:48:32.347: INFO: Started pod busybox-84f62286-b528-46f1-b779-6bc941295a1f in namespace container-probe-9922 STEP: checking the pod's current state and verifying that restartCount is present Oct 27 10:48:32.350: INFO: Initial restart count of pod busybox-84f62286-b528-46f1-b779-6bc941295a1f is 0 Oct 27 10:49:22.535: INFO: Restart count of pod container-probe-9922/busybox-84f62286-b528-46f1-b779-6bc941295a1f is now 1 (50.184582802s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:49:22.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9922" for this suite. 
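The restart counted above is triggered by an exec liveness probe that runs "cat /tmp/health" inside the container. A minimal illustrative pod (not the test's exact spec; the name liveness-exec-demo is hypothetical) that shows the same behaviour: the file disappears after 30 seconds, the probe starts failing, and the kubelet restarts the container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    # create the health file, keep it around for 30s, then remove it
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# RESTARTS increases once the probe has failed failureThreshold (default 3) times
kubectl get pod liveness-exec-demo -w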
• [SLOW TEST:54.456 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:49:22.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Oct 27 10:49:22.672: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix310313453/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:49:22.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7720" for this suite. 
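The Proxy server spec above only checks that kubectl proxy can listen on a Unix socket instead of a TCP port and still serve /api/. Reproducing that by hand is a few lines, assuming a curl build with --unix-socket support; the socket path is illustrative:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # returns the APIVersions object
kill %1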
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":73,"skipped":1189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:49:22.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 27 10:49:23.681: INFO: starting watch STEP: patching STEP: updating Oct 27 10:49:23.706: INFO: waiting for watch events with expected annotations Oct 27 10:49:23.706: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:49:23.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-404" for this suite. 
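The Certificates API spec above runs a CSR through create, read, list, watch, patch, update, the /approval and /status subresources, and delete. A hand-driven sketch of the same lifecycle, assuming openssl and GNU base64 are available; the object name and subject are illustrative:

openssl req -new -newkey rsa:2048 -nodes \
  -keyout demo.key -out demo.csr -subj "/CN=demo-user/O=demo-group"

kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 -w0 < demo.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF

kubectl get csr demo-csr
kubectl certificate approve demo-csr        # writes the /approval subresource
kubectl get csr demo-csr -o jsonpath='{.status.conditions[*].type}'
kubectl delete csr demo-csr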
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":74,"skipped":1236,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:49:23.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Oct 27 10:49:24.560: INFO: created pod pod-service-account-defaultsa Oct 27 10:49:24.560: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 27 10:49:24.569: INFO: created pod pod-service-account-mountsa Oct 27 10:49:24.569: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 27 10:49:24.596: INFO: created pod pod-service-account-nomountsa Oct 27 10:49:24.596: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 27 10:49:24.708: INFO: created pod pod-service-account-defaultsa-mountspec Oct 27 10:49:24.708: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 27 10:49:24.717: INFO: created pod pod-service-account-mountsa-mountspec Oct 27 10:49:24.717: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 27 10:49:25.119: INFO: created pod pod-service-account-nomountsa-mountspec Oct 27 10:49:25.119: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 27 10:49:25.198: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 27 10:49:25.198: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 27 10:49:25.273: INFO: created pod pod-service-account-mountsa-nomountspec Oct 27 10:49:25.273: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 27 10:49:25.313: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 27 10:49:25.313: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:49:25.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8607" for this suite. 
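The ServiceAccounts spec above verifies the interplay of automountServiceAccountToken on the ServiceAccount and on the pod spec: the pod-level field, when set, wins over the ServiceAccount default. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo
spec:
  serviceAccountName: nomount-sa
  containers:
  - name: c
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF

# no token volume is mounted; setting automountServiceAccountToken: true in
# the pod spec would override the ServiceAccount default and mount one again
kubectl get pod nomount-demo -o jsonpath='{.spec.containers[0].volumeMounts}'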
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":75,"skipped":1236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:49:25.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-j8lz STEP: Creating a pod to test atomic-volume-subpath Oct 27 10:49:26.181: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-j8lz" in namespace "subpath-3734" to be "Succeeded or Failed" Oct 27 10:49:26.190: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636562ms Oct 27 10:49:28.295: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113816432s Oct 27 10:49:30.816: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.634996094s Oct 27 10:49:33.623: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.442062403s Oct 27 10:49:35.667: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486013329s Oct 27 10:49:37.906: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.725005226s Oct 27 10:49:39.978: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 13.796850615s Oct 27 10:49:41.983: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 15.80156443s Oct 27 10:49:43.986: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 17.804957997s Oct 27 10:49:45.991: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 19.809932748s Oct 27 10:49:47.996: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 21.814660885s Oct 27 10:49:49.999: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 23.81811781s Oct 27 10:49:52.004: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 25.823015056s Oct 27 10:49:54.009: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 27.827468465s Oct 27 10:49:56.013: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. 
Elapsed: 29.831319062s Oct 27 10:49:58.017: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Running", Reason="", readiness=true. Elapsed: 31.836011735s Oct 27 10:50:00.022: INFO: Pod "pod-subpath-test-secret-j8lz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.840295047s STEP: Saw pod success Oct 27 10:50:00.022: INFO: Pod "pod-subpath-test-secret-j8lz" satisfied condition "Succeeded or Failed" Oct 27 10:50:00.025: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-j8lz container test-container-subpath-secret-j8lz: STEP: delete the pod Oct 27 10:50:00.077: INFO: Waiting for pod pod-subpath-test-secret-j8lz to disappear Oct 27 10:50:00.085: INFO: Pod pod-subpath-test-secret-j8lz no longer exists STEP: Deleting pod pod-subpath-test-secret-j8lz Oct 27 10:50:00.085: INFO: Deleting pod "pod-subpath-test-secret-j8lz" in namespace "subpath-3734" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:50:00.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3734" for this suite. • [SLOW TEST:34.286 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":76,"skipped":1270,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:50:00.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:50:16.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2155" for this suite. 
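The Job spec above ("tasks sometimes fail and are locally restarted") depends on restartPolicy: OnFailure, which makes the kubelet restart the failed container inside the same pod instead of letting the Job controller create a replacement pod. A sketch of a job whose containers fail exactly once, tracked through a marker file on an emptyDir; all names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-demo
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: c
        image: busybox:1.29
        command: ["/bin/sh", "-c", "if [ ! -f /data/ran ]; then touch /data/ran; exit 1; fi; exit 0"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF

kubectl wait --for=condition=complete job/fail-once-demo --timeout=120s
kubectl get pods -l job-name=fail-once-demo    # each pod ends Completed with restartCount 1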
• [SLOW TEST:16.256 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":77,"skipped":1277,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:50:16.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:50:16.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-727" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":78,"skipped":1289,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:50:16.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 27 10:50:16.722: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 27 10:50:16.763: INFO: Waiting for terminating namespaces to be deleted... 
Oct 27 10:50:16.766: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 27 10:50:16.772: INFO: fail-once-local-2lksc from job-2155 started at 2020-10-27 10:50:07 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.772: INFO: Container c ready: false, restart count 1 Oct 27 10:50:16.772: INFO: fail-once-local-5rx82 from job-2155 started at 2020-10-27 10:50:00 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.772: INFO: Container c ready: false, restart count 1 Oct 27 10:50:16.772: INFO: fail-once-local-ctnvx from job-2155 started at 2020-10-27 10:50:07 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.772: INFO: Container c ready: false, restart count 1 Oct 27 10:50:16.772: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.773: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 10:50:16.773: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.773: INFO: Container kube-proxy ready: true, restart count 0 Oct 27 10:50:16.773: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 27 10:50:16.779: INFO: fail-once-local-j7zh4 from job-2155 started at 2020-10-27 10:50:00 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.779: INFO: Container c ready: false, restart count 1 Oct 27 10:50:16.779: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.779: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 10:50:16.779: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:50:16.779: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-92a66e76-8ec6-432e-993b-244b110fc310 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-92a66e76-8ec6-432e-993b-244b110fc310 off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-92a66e76-8ec6-432e-993b-244b110fc310 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:55:25.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9965" for this suite. 
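The SchedulerPredicates spec above pins two pods to one node with the same hostPort; because the first one binds 0.0.0.0, the second cannot be scheduled even though it asks only for 127.0.0.1. A sketch of the same conflict, assuming a node labelled kubernetes.io/hostname=kali-worker as in this run (substitute any real node name):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-a
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 80
      hostPort: 54322              # hostIP defaults to 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-b
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 127.0.0.1            # still clashes with the 0.0.0.0 binding
EOF

kubectl get pod hostport-b         # stays Pending with a FailedScheduling event about free ports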
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.425 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":79,"skipped":1293,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:55:25.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 27 10:55:25.133: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 27 10:55:25.148: INFO: Waiting for terminating namespaces to be deleted... 
Oct 27 10:55:25.167: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 27 10:55:25.174: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:55:25.174: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 10:55:25.174: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:55:25.174: INFO: Container kube-proxy ready: true, restart count 0 Oct 27 10:55:25.174: INFO: pod4 from sched-pred-9965 started at 2020-10-27 10:50:21 +0000 UTC (1 container statuses recorded) Oct 27 10:55:25.174: INFO: Container pod4 ready: true, restart count 0 Oct 27 10:55:25.174: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 27 10:55:25.179: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:55:25.179: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 10:55:25.179: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 10:55:25.179: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1641d4158ccb7848], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1641d41592550bc0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:55:32.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9863" for this suite. 
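The NodeSelector spec above only needs a pod whose nodeSelector matches no node; the scheduler then leaves it Pending and records FailedScheduling events like the ones logged here. A minimal reproduction, with a deliberately non-existent label key:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo
spec:
  nodeSelector:
    example.com/does-not-exist: "true"
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2
EOF

kubectl get pod restricted-demo    # remains Pending
kubectl get events --field-selector involvedObject.name=restricted-demo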
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.169 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":80,"skipped":1305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:55:32.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 10:55:32.564: INFO: Checking APIGroup: apiregistration.k8s.io Oct 27 10:55:32.565: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 27 10:55:32.565: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.565: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 27 10:55:32.565: INFO: Checking APIGroup: extensions Oct 27 10:55:32.565: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 27 10:55:32.565: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 27 10:55:32.565: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 27 10:55:32.565: INFO: Checking APIGroup: apps Oct 27 10:55:32.566: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 27 10:55:32.566: INFO: Versions found [{apps/v1 v1}] Oct 27 10:55:32.566: INFO: apps/v1 matches apps/v1 Oct 27 10:55:32.566: INFO: Checking APIGroup: events.k8s.io Oct 27 10:55:32.567: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 27 10:55:32.567: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.567: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 27 10:55:32.567: INFO: Checking APIGroup: authentication.k8s.io Oct 27 10:55:32.567: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 27 10:55:32.567: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.567: INFO: authentication.k8s.io/v1 matches 
authentication.k8s.io/v1 Oct 27 10:55:32.567: INFO: Checking APIGroup: authorization.k8s.io Oct 27 10:55:32.569: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 27 10:55:32.569: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.569: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Oct 27 10:55:32.569: INFO: Checking APIGroup: autoscaling Oct 27 10:55:32.570: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 27 10:55:32.570: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 27 10:55:32.570: INFO: autoscaling/v1 matches autoscaling/v1 Oct 27 10:55:32.570: INFO: Checking APIGroup: batch Oct 27 10:55:32.571: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 27 10:55:32.571: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 27 10:55:32.571: INFO: batch/v1 matches batch/v1 Oct 27 10:55:32.571: INFO: Checking APIGroup: certificates.k8s.io Oct 27 10:55:32.571: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 27 10:55:32.571: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.571: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 27 10:55:32.571: INFO: Checking APIGroup: networking.k8s.io Oct 27 10:55:32.572: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 27 10:55:32.572: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.572: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 27 10:55:32.572: INFO: Checking APIGroup: policy Oct 27 10:55:32.573: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Oct 27 10:55:32.573: INFO: Versions found [{policy/v1beta1 v1beta1}] Oct 27 10:55:32.573: INFO: policy/v1beta1 matches policy/v1beta1 Oct 27 10:55:32.573: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 27 10:55:32.574: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 27 10:55:32.574: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.574: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 27 10:55:32.574: INFO: Checking APIGroup: storage.k8s.io Oct 27 10:55:32.574: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 27 10:55:32.574: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.574: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 27 10:55:32.574: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 27 10:55:32.575: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 27 10:55:32.575: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.575: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Oct 27 10:55:32.575: INFO: Checking APIGroup: apiextensions.k8s.io Oct 27 10:55:32.576: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 27 10:55:32.576: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.576: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 27 10:55:32.576: INFO: Checking APIGroup: scheduling.k8s.io Oct 27 10:55:32.577: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 27 10:55:32.577: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.577: 
INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 27 10:55:32.577: INFO: Checking APIGroup: coordination.k8s.io Oct 27 10:55:32.578: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 27 10:55:32.578: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.578: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 27 10:55:32.578: INFO: Checking APIGroup: node.k8s.io Oct 27 10:55:32.579: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Oct 27 10:55:32.579: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.579: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Oct 27 10:55:32.579: INFO: Checking APIGroup: discovery.k8s.io Oct 27 10:55:32.579: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Oct 27 10:55:32.579: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Oct 27 10:55:32.579: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:55:32.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-9826" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":81,"skipped":1338,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:55:32.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-549, will wait for the garbage collector to delete the pods Oct 27 10:55:36.724: INFO: Deleting Job.batch foo took: 6.503252ms Oct 27 10:55:38.024: INFO: Terminating Job.batch foo pods took: 1.300212716s STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:56:18.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-549" for this suite. 
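The Discovery spec above walks every APIGroup returned by /apis and checks that preferredVersion is one of the served versions. The same data is visible from the command line; python3 here is only used for pretty-printing and is an assumption about the workstation:

kubectl api-versions                                   # flat list of served group/versions
kubectl get --raw /apis/batch | python3 -m json.tool   # one APIGroup, with "versions" and "preferredVersion"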
• [SLOW TEST:46.150 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":82,"skipped":1355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:56:18.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 27 10:56:18.900: INFO: starting watch STEP: patching STEP: updating Oct 27 10:56:18.910: INFO: waiting for watch events with expected annotations Oct 27 10:56:18.910: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:56:18.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-6250" for this suite. 
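The IngressClass API spec above exercises plain CRUD plus delete-collection against networking.k8s.io/v1 IngressClass objects. A hand-run sketch with illustrative names and a made-up controller string:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: demo-class
  labels:
    e2e: demo
spec:
  controller: example.com/ingress-controller
EOF

kubectl get ingressclass demo-class -o yaml
kubectl patch ingressclass demo-class --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl delete ingressclass -l e2e=demo        # delete-collection by label selector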
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":83,"skipped":1407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:56:18.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 27 10:56:19.056: INFO: Waiting up to 1m0s for all nodes to be ready Oct 27 10:57:19.081: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 27 10:57:19.107: INFO: Created pod: pod0-sched-preemption-low-priority Oct 27 10:57:19.145: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:57:43.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2547" for this suite. 
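The SchedulerPreemption spec above fills the nodes with low and medium priority pods and then submits a higher-priority pod with the same resource request, expecting the scheduler to evict a victim. The moving parts are a PriorityClass and a pod referencing it; the sketch below assumes the request is large enough that it only fits after a preemption, which depends on the cluster:

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-high
value: 1000000
---
apiVersion: v1
kind: Pod
metadata:
  name: important
spec:
  priorityClassName: demo-high
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "500m"
EOF

# during preemption the scheduler records the target node here
kubectl get pod important -o jsonpath='{.status.nominatedNodeName}'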
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:84.334 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":84,"skipped":1443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:57:43.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:57:43.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2704" for this suite. 
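The Pods Set QOS Class spec above checks that a pod whose requests equal its limits for both cpu and memory is classified as Guaranteed. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
      limits:
        cpu: "100m"
        memory: "100Mi"
EOF

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed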
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":85,"skipped":1482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:57:43.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a966c19d-2a0f-402e-99cc-d6ccc06eef7a STEP: Creating a pod to test consume secrets Oct 27 10:57:43.607: INFO: Waiting up to 5m0s for pod "pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699" in namespace "secrets-5635" to be "Succeeded or Failed" Oct 27 10:57:43.618: INFO: Pod "pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699": Phase="Pending", Reason="", readiness=false. Elapsed: 10.602731ms Oct 27 10:57:45.621: INFO: Pod "pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014015645s Oct 27 10:57:47.625: INFO: Pod "pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017876047s Oct 27 10:57:49.628: INFO: Pod "pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020765854s STEP: Saw pod success Oct 27 10:57:49.628: INFO: Pod "pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699" satisfied condition "Succeeded or Failed" Oct 27 10:57:49.631: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699 container secret-volume-test: STEP: delete the pod Oct 27 10:57:49.747: INFO: Waiting for pod pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699 to disappear Oct 27 10:57:49.755: INFO: Pod pod-secrets-4dcf12c0-af2d-4468-a12c-6e0a7c161699 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:57:49.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5635" for this suite. 
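The Secrets volume spec above mounts a secret with defaultMode set and has the test container print the file mode. An equivalent by hand, with hypothetical names; 0400 is an octal literal, so the projected file shows up read-only for the owner:

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.29
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400
EOF

kubectl logs secret-mode-demo      # once the pod has completed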
• [SLOW TEST:6.318 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1546,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:57:49.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 10:57:50.625: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 10:57:52.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393070, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393070, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393070, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393070, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 10:57:55.676: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Oct 27 10:57:55.694: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:57:55.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4295" for this suite. STEP: Destroying namespace "webhook-4295-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.056 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":87,"skipped":1568,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:57:55.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 27 10:58:00.439: INFO: Successfully updated pod "pod-update-activedeadlineseconds-15c27b66-0152-4ec4-9854-49b54f0fb12b" Oct 27 10:58:00.439: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-15c27b66-0152-4ec4-9854-49b54f0fb12b" in namespace "pods-683" to be "terminated due to deadline exceeded" Oct 27 10:58:00.455: INFO: Pod "pod-update-activedeadlineseconds-15c27b66-0152-4ec4-9854-49b54f0fb12b": Phase="Running", Reason="", readiness=true. Elapsed: 15.653197ms Oct 27 10:58:02.461: INFO: Pod "pod-update-activedeadlineseconds-15c27b66-0152-4ec4-9854-49b54f0fb12b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.021135892s Oct 27 10:58:02.461: INFO: Pod "pod-update-activedeadlineseconds-15c27b66-0152-4ec4-9854-49b54f0fb12b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:02.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-683" for this suite. • [SLOW TEST:6.648 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:02.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1724.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1724.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1724.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1724.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1724.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1724.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 10:58:08.575: INFO: DNS probes using dns-1724/dns-test-d77dbe91-e8b5-4c40-b4e6-f4478de191eb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:08.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1724" for this suite. • [SLOW TEST:6.165 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":89,"skipped":1614,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:08.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Oct 27 10:58:08.745: INFO: created test-event-1 Oct 27 10:58:08.751: INFO: created test-event-2 Oct 27 10:58:08.757: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Oct 27 10:58:08.763: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Oct 27 10:58:09.220: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:09.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2418" for this suite. 
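The Events spec above creates a few labelled Event objects and then removes them in one DeleteCollection call. Event objects can be created directly through the API like any other resource; the involvedObject below is a placeholder and all names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Event
metadata:
  name: demo-event-1
  labels:
    testevent-set: demo
involvedObject:
  kind: Pod
  name: some-pod
  namespace: default
reason: Demo
message: illustrative event created by hand
type: Normal
EOF

kubectl get events -l testevent-set=demo
kubectl delete events -l testevent-set=demo    # DeleteCollection via label selector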
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":90,"skipped":1623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:09.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:16.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9533" for this suite. • [SLOW TEST:7.199 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":91,"skipped":1670,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:16.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 27 10:58:16.548: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:33.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2763" for this suite. • [SLOW TEST:16.949 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":92,"skipped":1690,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:33.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the 
ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:33.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9670" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":93,"skipped":1704,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:33.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Oct 27 10:58:33.647: INFO: Waiting up to 5m0s for pod "var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce" in namespace "var-expansion-3827" to be "Succeeded or Failed" Oct 27 10:58:33.656: INFO: Pod "var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54332ms Oct 27 10:58:35.674: INFO: Pod "var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026743982s Oct 27 10:58:37.679: INFO: Pod "var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce": Phase="Running", Reason="", readiness=true. Elapsed: 4.031752474s Oct 27 10:58:39.683: INFO: Pod "var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035413038s STEP: Saw pod success Oct 27 10:58:39.683: INFO: Pod "var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce" satisfied condition "Succeeded or Failed" Oct 27 10:58:39.685: INFO: Trying to get logs from node kali-worker2 pod var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce container dapi-container: STEP: delete the pod Oct 27 10:58:39.712: INFO: Waiting for pod var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce to disappear Oct 27 10:58:39.722: INFO: Pod var-expansion-fd2cc3ec-087e-42c4-8c24-f5a40d4d55ce no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:39.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3827" for this suite. 
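The substitution checked above relies on Kubernetes expanding $(VAR) references in a container's command/args from that container's env before the process starts. A minimal sketch, assuming kubectl against any cluster; the image and names are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    # $(MESSAGE) is expanded by Kubernetes from the env below, not by the shell
    command: ["sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "test-value"
EOF
# after the pod completes, the expanded value shows up in its log
kubectl logs var-expansion-demo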
• [SLOW TEST:6.163 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1709,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:39.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-0bbff05e-4e99-4a05-926d-c5013722816b STEP: Creating a pod to test consume secrets Oct 27 10:58:39.844: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14" in namespace "projected-8485" to be "Succeeded or Failed" Oct 27 10:58:39.888: INFO: Pod "pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 44.230595ms Oct 27 10:58:41.892: INFO: Pod "pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048151313s Oct 27 10:58:43.926: INFO: Pod "pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081658407s STEP: Saw pod success Oct 27 10:58:43.926: INFO: Pod "pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14" satisfied condition "Succeeded or Failed" Oct 27 10:58:43.928: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14 container projected-secret-volume-test: STEP: delete the pod Oct 27 10:58:44.117: INFO: Waiting for pod pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14 to disappear Oct 27 10:58:44.160: INFO: Pod pod-projected-secrets-d48f1f7a-e4e4-4734-9100-816d985f2f14 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:44.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8485" for this suite. 
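Consuming a projected secret as non-root with defaultMode and fsGroup, as exercised above, looks roughly like the following; the secret name, uid/gid, and mode are illustrative rather than the test's values:
kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 2000     # group ownership applied to the projected files
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440   # mode given to the projected entries
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-secret-demo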
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1713,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:44.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 27 10:58:48.735: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 10:58:48.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3690" for this suite. 
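The behaviour verified above (the message comes from the termination file when the container writes one and exits cleanly; FallbackToLogsOnError only falls back to the log for an empty file on failure) can be reproduced with a pod along these lines; names are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox:1.28
    # write the message to the termination-log file and exit 0
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod has completed, the message is recorded in the container status
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'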
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":96,"skipped":1713,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 10:58:48.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8331 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 27 10:58:48.910: INFO: Found 0 stateful pods, waiting for 3 Oct 27 10:58:58.916: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:58:58.916: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:58:58.916: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 27 10:59:08.915: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:59:08.915: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:59:08.915: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Oct 27 10:59:08.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8331 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 10:59:12.142: INFO: stderr: "I1027 10:59:11.999668 734 log.go:181] (0xc000e50c60) (0xc000c18500) Create stream\nI1027 10:59:11.999745 734 log.go:181] (0xc000e50c60) (0xc000c18500) Stream added, broadcasting: 1\nI1027 10:59:12.002108 734 log.go:181] (0xc000e50c60) Reply frame received for 1\nI1027 10:59:12.002165 734 log.go:181] (0xc000e50c60) (0xc000c185a0) Create stream\nI1027 10:59:12.002195 734 log.go:181] (0xc000e50c60) (0xc000c185a0) Stream added, broadcasting: 3\nI1027 10:59:12.003232 734 log.go:181] (0xc000e50c60) Reply frame received for 3\nI1027 10:59:12.003285 734 log.go:181] (0xc000e50c60) (0xc000bb20a0) Create stream\nI1027 10:59:12.003301 734 log.go:181] (0xc000e50c60) (0xc000bb20a0) Stream added, broadcasting: 5\nI1027 10:59:12.004111 
734 log.go:181] (0xc000e50c60) Reply frame received for 5\nI1027 10:59:12.100571 734 log.go:181] (0xc000e50c60) Data frame received for 5\nI1027 10:59:12.100595 734 log.go:181] (0xc000bb20a0) (5) Data frame handling\nI1027 10:59:12.100607 734 log.go:181] (0xc000bb20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 10:59:12.132787 734 log.go:181] (0xc000e50c60) Data frame received for 3\nI1027 10:59:12.132820 734 log.go:181] (0xc000c185a0) (3) Data frame handling\nI1027 10:59:12.132892 734 log.go:181] (0xc000c185a0) (3) Data frame sent\nI1027 10:59:12.132917 734 log.go:181] (0xc000e50c60) Data frame received for 3\nI1027 10:59:12.132926 734 log.go:181] (0xc000c185a0) (3) Data frame handling\nI1027 10:59:12.133081 734 log.go:181] (0xc000e50c60) Data frame received for 5\nI1027 10:59:12.133104 734 log.go:181] (0xc000bb20a0) (5) Data frame handling\nI1027 10:59:12.135270 734 log.go:181] (0xc000e50c60) Data frame received for 1\nI1027 10:59:12.135297 734 log.go:181] (0xc000c18500) (1) Data frame handling\nI1027 10:59:12.135309 734 log.go:181] (0xc000c18500) (1) Data frame sent\nI1027 10:59:12.135321 734 log.go:181] (0xc000e50c60) (0xc000c18500) Stream removed, broadcasting: 1\nI1027 10:59:12.135360 734 log.go:181] (0xc000e50c60) Go away received\nI1027 10:59:12.135810 734 log.go:181] (0xc000e50c60) (0xc000c18500) Stream removed, broadcasting: 1\nI1027 10:59:12.135830 734 log.go:181] (0xc000e50c60) (0xc000c185a0) Stream removed, broadcasting: 3\nI1027 10:59:12.135841 734 log.go:181] (0xc000e50c60) (0xc000bb20a0) Stream removed, broadcasting: 5\n" Oct 27 10:59:12.142: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 10:59:12.142: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 27 10:59:22.201: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Oct 27 10:59:32.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8331 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 10:59:32.528: INFO: stderr: "I1027 10:59:32.416675 753 log.go:181] (0xc000161340) (0xc00015c6e0) Create stream\nI1027 10:59:32.416750 753 log.go:181] (0xc000161340) (0xc00015c6e0) Stream added, broadcasting: 1\nI1027 10:59:32.424684 753 log.go:181] (0xc000161340) Reply frame received for 1\nI1027 10:59:32.424729 753 log.go:181] (0xc000161340) (0xc00015c000) Create stream\nI1027 10:59:32.424743 753 log.go:181] (0xc000161340) (0xc00015c000) Stream added, broadcasting: 3\nI1027 10:59:32.426525 753 log.go:181] (0xc000161340) Reply frame received for 3\nI1027 10:59:32.426554 753 log.go:181] (0xc000161340) (0xc000b88000) Create stream\nI1027 10:59:32.426562 753 log.go:181] (0xc000161340) (0xc000b88000) Stream added, broadcasting: 5\nI1027 10:59:32.428901 753 log.go:181] (0xc000161340) Reply frame received for 5\nI1027 10:59:32.520649 753 log.go:181] (0xc000161340) Data frame received for 5\nI1027 10:59:32.520691 753 log.go:181] (0xc000b88000) (5) Data frame handling\nI1027 10:59:32.520702 753 log.go:181] (0xc000b88000) (5) Data frame sent\nI1027 10:59:32.520709 753 log.go:181] (0xc000161340) Data frame received for 5\nI1027 10:59:32.520714 753 log.go:181] 
(0xc000b88000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 10:59:32.520736 753 log.go:181] (0xc000161340) Data frame received for 3\nI1027 10:59:32.520742 753 log.go:181] (0xc00015c000) (3) Data frame handling\nI1027 10:59:32.520747 753 log.go:181] (0xc00015c000) (3) Data frame sent\nI1027 10:59:32.520755 753 log.go:181] (0xc000161340) Data frame received for 3\nI1027 10:59:32.520766 753 log.go:181] (0xc00015c000) (3) Data frame handling\nI1027 10:59:32.522308 753 log.go:181] (0xc000161340) Data frame received for 1\nI1027 10:59:32.522331 753 log.go:181] (0xc00015c6e0) (1) Data frame handling\nI1027 10:59:32.522346 753 log.go:181] (0xc00015c6e0) (1) Data frame sent\nI1027 10:59:32.522365 753 log.go:181] (0xc000161340) (0xc00015c6e0) Stream removed, broadcasting: 1\nI1027 10:59:32.522391 753 log.go:181] (0xc000161340) Go away received\nI1027 10:59:32.522899 753 log.go:181] (0xc000161340) (0xc00015c6e0) Stream removed, broadcasting: 1\nI1027 10:59:32.522927 753 log.go:181] (0xc000161340) (0xc00015c000) Stream removed, broadcasting: 3\nI1027 10:59:32.522937 753 log.go:181] (0xc000161340) (0xc000b88000) Stream removed, broadcasting: 5\n" Oct 27 10:59:32.529: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 10:59:32.529: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 10:59:42.548: INFO: Waiting for StatefulSet statefulset-8331/ss2 to complete update Oct 27 10:59:42.548: INFO: Waiting for Pod statefulset-8331/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 27 10:59:42.548: INFO: Waiting for Pod statefulset-8331/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 27 10:59:52.557: INFO: Waiting for StatefulSet statefulset-8331/ss2 to complete update STEP: Rolling back to a previous revision Oct 27 11:00:02.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8331 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:00:02.828: INFO: stderr: "I1027 11:00:02.684691 771 log.go:181] (0xc00003b340) (0xc000d06820) Create stream\nI1027 11:00:02.684745 771 log.go:181] (0xc00003b340) (0xc000d06820) Stream added, broadcasting: 1\nI1027 11:00:02.686472 771 log.go:181] (0xc00003b340) Reply frame received for 1\nI1027 11:00:02.686506 771 log.go:181] (0xc00003b340) (0xc0008d4000) Create stream\nI1027 11:00:02.686516 771 log.go:181] (0xc00003b340) (0xc0008d4000) Stream added, broadcasting: 3\nI1027 11:00:02.687058 771 log.go:181] (0xc00003b340) Reply frame received for 3\nI1027 11:00:02.687081 771 log.go:181] (0xc00003b340) (0xc000c4c280) Create stream\nI1027 11:00:02.687096 771 log.go:181] (0xc00003b340) (0xc000c4c280) Stream added, broadcasting: 5\nI1027 11:00:02.687681 771 log.go:181] (0xc00003b340) Reply frame received for 5\nI1027 11:00:02.769194 771 log.go:181] (0xc00003b340) Data frame received for 5\nI1027 11:00:02.769222 771 log.go:181] (0xc000c4c280) (5) Data frame handling\nI1027 11:00:02.769236 771 log.go:181] (0xc000c4c280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:00:02.819435 771 log.go:181] (0xc00003b340) Data frame received for 3\nI1027 11:00:02.819458 771 log.go:181] (0xc0008d4000) (3) Data frame handling\nI1027 11:00:02.819465 771 log.go:181] (0xc0008d4000) (3) Data frame sent\nI1027 11:00:02.819723 771 
log.go:181] (0xc00003b340) Data frame received for 3\nI1027 11:00:02.819750 771 log.go:181] (0xc0008d4000) (3) Data frame handling\nI1027 11:00:02.820086 771 log.go:181] (0xc00003b340) Data frame received for 5\nI1027 11:00:02.820115 771 log.go:181] (0xc000c4c280) (5) Data frame handling\nI1027 11:00:02.822130 771 log.go:181] (0xc00003b340) Data frame received for 1\nI1027 11:00:02.822159 771 log.go:181] (0xc000d06820) (1) Data frame handling\nI1027 11:00:02.822184 771 log.go:181] (0xc000d06820) (1) Data frame sent\nI1027 11:00:02.822381 771 log.go:181] (0xc00003b340) (0xc000d06820) Stream removed, broadcasting: 1\nI1027 11:00:02.822425 771 log.go:181] (0xc00003b340) Go away received\nI1027 11:00:02.822810 771 log.go:181] (0xc00003b340) (0xc000d06820) Stream removed, broadcasting: 1\nI1027 11:00:02.822822 771 log.go:181] (0xc00003b340) (0xc0008d4000) Stream removed, broadcasting: 3\nI1027 11:00:02.822827 771 log.go:181] (0xc00003b340) (0xc000c4c280) Stream removed, broadcasting: 5\n" Oct 27 11:00:02.828: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:00:02.828: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:00:12.861: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 27 11:00:22.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8331 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:00:23.172: INFO: stderr: "I1027 11:00:23.084808 787 log.go:181] (0xc0009233f0) (0xc000916a00) Create stream\nI1027 11:00:23.084957 787 log.go:181] (0xc0009233f0) (0xc000916a00) Stream added, broadcasting: 1\nI1027 11:00:23.089588 787 log.go:181] (0xc0009233f0) Reply frame received for 1\nI1027 11:00:23.089628 787 log.go:181] (0xc0009233f0) (0xc000916000) Create stream\nI1027 11:00:23.089640 787 log.go:181] (0xc0009233f0) (0xc000916000) Stream added, broadcasting: 3\nI1027 11:00:23.090700 787 log.go:181] (0xc0009233f0) Reply frame received for 3\nI1027 11:00:23.090731 787 log.go:181] (0xc0009233f0) (0xc00015f860) Create stream\nI1027 11:00:23.090747 787 log.go:181] (0xc0009233f0) (0xc00015f860) Stream added, broadcasting: 5\nI1027 11:00:23.091807 787 log.go:181] (0xc0009233f0) Reply frame received for 5\nI1027 11:00:23.162853 787 log.go:181] (0xc0009233f0) Data frame received for 3\nI1027 11:00:23.162887 787 log.go:181] (0xc000916000) (3) Data frame handling\nI1027 11:00:23.162896 787 log.go:181] (0xc000916000) (3) Data frame sent\nI1027 11:00:23.162901 787 log.go:181] (0xc0009233f0) Data frame received for 3\nI1027 11:00:23.162906 787 log.go:181] (0xc000916000) (3) Data frame handling\nI1027 11:00:23.162926 787 log.go:181] (0xc0009233f0) Data frame received for 5\nI1027 11:00:23.162933 787 log.go:181] (0xc00015f860) (5) Data frame handling\nI1027 11:00:23.162939 787 log.go:181] (0xc00015f860) (5) Data frame sent\nI1027 11:00:23.162943 787 log.go:181] (0xc0009233f0) Data frame received for 5\nI1027 11:00:23.162949 787 log.go:181] (0xc00015f860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 11:00:23.164195 787 log.go:181] (0xc0009233f0) Data frame received for 1\nI1027 11:00:23.164214 787 log.go:181] (0xc000916a00) (1) Data frame handling\nI1027 11:00:23.164221 787 log.go:181] (0xc000916a00) (1) Data frame sent\nI1027 11:00:23.164232 787 log.go:181] (0xc0009233f0) 
(0xc000916a00) Stream removed, broadcasting: 1\nI1027 11:00:23.164244 787 log.go:181] (0xc0009233f0) Go away received\nI1027 11:00:23.164592 787 log.go:181] (0xc0009233f0) (0xc000916a00) Stream removed, broadcasting: 1\nI1027 11:00:23.164610 787 log.go:181] (0xc0009233f0) (0xc000916000) Stream removed, broadcasting: 3\nI1027 11:00:23.164617 787 log.go:181] (0xc0009233f0) (0xc00015f860) Stream removed, broadcasting: 5\n" Oct 27 11:00:23.173: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:00:23.173: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:00:33.194: INFO: Waiting for StatefulSet statefulset-8331/ss2 to complete update Oct 27 11:00:33.194: INFO: Waiting for Pod statefulset-8331/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 27 11:00:33.194: INFO: Waiting for Pod statefulset-8331/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 27 11:00:33.194: INFO: Waiting for Pod statefulset-8331/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 27 11:00:43.202: INFO: Waiting for StatefulSet statefulset-8331/ss2 to complete update Oct 27 11:00:43.202: INFO: Waiting for Pod statefulset-8331/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 27 11:00:43.202: INFO: Waiting for Pod statefulset-8331/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 27 11:00:53.201: INFO: Waiting for StatefulSet statefulset-8331/ss2 to complete update Oct 27 11:00:53.201: INFO: Waiting for Pod statefulset-8331/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 27 11:01:03.206: INFO: Deleting all statefulset in ns statefulset-8331 Oct 27 11:01:03.208: INFO: Scaling statefulset ss2 to 0 Oct 27 11:01:23.222: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:01:23.225: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:01:23.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8331" for this suite. 
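The image update and rollback driven programmatically above can be approximated with kubectl's rollout commands; a sketch assuming a StatefulSet named ss2 with a container named webserver (namespace and names are illustrative):
# update the pod template image and watch the rolling update proceed
kubectl -n statefulset-demo set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-demo rollout status statefulset/ss2
# roll back to the previous controller revision
kubectl -n statefulset-demo rollout undo statefulset/ss2
kubectl -n statefulset-demo rollout status statefulset/ss2
# each pod's revision is visible through the controller-revision-hash label
kubectl -n statefulset-demo get pods -L controller-revision-hash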
• [SLOW TEST:154.418 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":97,"skipped":1725,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:01:23.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Oct 27 11:01:23.370: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:23.374: INFO: Number of nodes with available pods: 0 Oct 27 11:01:23.374: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:01:24.379: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:24.383: INFO: Number of nodes with available pods: 0 Oct 27 11:01:24.383: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:01:25.380: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:25.384: INFO: Number of nodes with available pods: 0 Oct 27 11:01:25.384: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:01:26.379: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:26.383: INFO: Number of nodes with available pods: 0 Oct 27 11:01:26.383: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:01:27.379: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:27.381: INFO: Number of nodes with available pods: 1 Oct 27 11:01:27.381: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:01:28.407: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:28.416: INFO: Number of nodes with available pods: 2 Oct 27 11:01:28.416: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Oct 27 11:01:28.456: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:28.464: INFO: Number of nodes with available pods: 1 Oct 27 11:01:28.464: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:29.477: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:29.480: INFO: Number of nodes with available pods: 1 Oct 27 11:01:29.480: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:30.469: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:30.473: INFO: Number of nodes with available pods: 1 Oct 27 11:01:30.473: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:31.471: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:31.475: INFO: Number of nodes with available pods: 1 Oct 27 11:01:31.475: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:32.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:32.474: INFO: Number of nodes with available pods: 1 Oct 27 11:01:32.474: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:33.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:33.474: INFO: Number of nodes with available pods: 1 Oct 27 11:01:33.474: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:34.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:34.475: INFO: Number of nodes with available pods: 1 Oct 27 11:01:34.475: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:35.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:35.475: INFO: Number of nodes with available pods: 1 Oct 27 11:01:35.475: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:36.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:36.474: INFO: Number of nodes with available pods: 1 Oct 27 11:01:36.474: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:37.471: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:37.475: INFO: Number of nodes with available pods: 1 Oct 27 11:01:37.475: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:38.536: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Oct 27 11:01:38.540: INFO: Number of nodes with available pods: 1 Oct 27 11:01:38.540: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:39.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:39.474: INFO: Number of nodes with available pods: 1 Oct 27 11:01:39.474: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:40.678: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:40.682: INFO: Number of nodes with available pods: 1 Oct 27 11:01:40.682: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:41.470: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:41.473: INFO: Number of nodes with available pods: 1 Oct 27 11:01:41.473: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:01:42.655: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:01:42.659: INFO: Number of nodes with available pods: 2 Oct 27 11:01:42.659: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4281, will wait for the garbage collector to delete the pods Oct 27 11:01:42.741: INFO: Deleting DaemonSet.extensions daemon-set took: 4.813836ms Oct 27 11:01:43.242: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.25069ms Oct 27 11:01:58.185: INFO: Number of nodes with available pods: 0 Oct 27 11:01:58.185: INFO: Number of running nodes: 0, number of available pods: 0 Oct 27 11:01:58.188: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4281/daemonsets","resourceVersion":"8966286"},"items":null} Oct 27 11:01:58.190: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4281/pods","resourceVersion":"8966286"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:01:58.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4281" for this suite. 
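A self-contained way to reproduce the launch-on-every-node and self-healing behaviour checked above; the manifest and image are illustrative rather than the test's generated template:
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF
# one pod per schedulable node (tainted control-plane nodes are skipped, as in the log above)
kubectl get pods -l app=daemon-set-demo -o wide
# delete one daemon pod; the controller recreates it
kubectl delete "$(kubectl get pods -l app=daemon-set-demo -o name | head -n 1)"
kubectl get pods -l app=daemon-set-demo -o wide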
• [SLOW TEST:34.957 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":98,"skipped":1734,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:01:58.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 27 11:01:58.257: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:02:14.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1324" for this suite. 
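The effect checked above, an unserved version disappearing from the published OpenAPI spec, can be sketched with a two-version CRD; the group, kind, and names are illustrative:
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demofoos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demofoos
    singular: demofoo
    kind: DemoFoo
    listKind: DemoFooList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: false    # flipping a version to served: false drops it from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# the served version stays explorable; the unserved one is absent (publishing can take a few seconds)
kubectl explain demofoos --api-version=example.com/v1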
• [SLOW TEST:16.412 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":99,"skipped":1748,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:02:14.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:02:15.442: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:02:17.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393335, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393335, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393335, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393335, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:02:20.897: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow 
webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:02:33.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6514" for this suite. STEP: Destroying namespace "webhook-6514-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.592 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":100,"skipped":1754,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:02:33.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1027 11:03:13.616740 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 27 11:04:15.632: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Oct 27 11:04:15.632: INFO: Deleting pod "simpletest.rc-9j6pc" in namespace "gc-2358" Oct 27 11:04:15.680: INFO: Deleting pod "simpletest.rc-bm6db" in namespace "gc-2358" Oct 27 11:04:15.738: INFO: Deleting pod "simpletest.rc-c64x4" in namespace "gc-2358" Oct 27 11:04:15.782: INFO: Deleting pod "simpletest.rc-jdppw" in namespace "gc-2358" Oct 27 11:04:15.967: INFO: Deleting pod "simpletest.rc-l5szd" in namespace "gc-2358" Oct 27 11:04:16.268: INFO: Deleting pod "simpletest.rc-m7td8" in namespace "gc-2358" Oct 27 11:04:16.374: INFO: Deleting pod "simpletest.rc-r4kbt" in namespace "gc-2358" Oct 27 11:04:16.738: INFO: Deleting pod "simpletest.rc-thvsh" in namespace "gc-2358" Oct 27 11:04:16.799: INFO: Deleting pod "simpletest.rc-x74rl" in namespace "gc-2358" Oct 27 11:04:17.366: INFO: Deleting pod "simpletest.rc-z9p52" in namespace "gc-2358" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:04:17.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2358" for this suite. • [SLOW TEST:104.443 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":101,"skipped":1760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:04:17.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:06:18.366: INFO: Deleting pod "var-expansion-111a9f3a-a96c-4c4f-9a86-4feb1a483c3c" in namespace "var-expansion-2093" Oct 27 11:06:18.389: INFO: Wait up to 5m0s for pod "var-expansion-111a9f3a-a96c-4c4f-9a86-4feb1a483c3c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:06:20.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2093" for this suite. 
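The orphaning behaviour exercised by the garbage-collector test further up, where the ReplicationController is deleted but its pods stay behind, corresponds to a non-cascading delete. A kubectl sketch with illustrative names; kubectl of this vintage spells the flag --cascade=false, which newer releases rename to --cascade=orphan:
# delete the controller but leave its pods behind as orphans
kubectl -n gc-demo delete rc simpletest-rc --cascade=false
# the pods survive, now without an ownerReference pointing at the RC
kubectl -n gc-demo get pods
kubectl -n gc-demo get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences}{"\n"}{end}'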
• [SLOW TEST:122.783 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":102,"skipped":1802,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:06:20.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:06:36.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-760" for this suite. • [SLOW TEST:16.173 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":103,"skipped":1821,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:06:36.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:06:36.758: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Oct 27 11:06:36.824: INFO: Number of nodes with available pods: 0 Oct 27 11:06:36.824: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Oct 27 11:06:36.914: INFO: Number of nodes with available pods: 0 Oct 27 11:06:36.914: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:37.919: INFO: Number of nodes with available pods: 0 Oct 27 11:06:37.919: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:38.957: INFO: Number of nodes with available pods: 0 Oct 27 11:06:38.957: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:39.920: INFO: Number of nodes with available pods: 0 Oct 27 11:06:39.920: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:40.919: INFO: Number of nodes with available pods: 0 Oct 27 11:06:40.919: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:41.918: INFO: Number of nodes with available pods: 1 Oct 27 11:06:41.918: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Oct 27 11:06:41.964: INFO: Number of nodes with available pods: 1 Oct 27 11:06:41.964: INFO: Number of running nodes: 0, number of available pods: 1 Oct 27 11:06:42.968: INFO: Number of nodes with available pods: 0 Oct 27 11:06:42.968: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Oct 27 11:06:43.016: INFO: Number of nodes with available pods: 0 Oct 27 11:06:43.016: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:44.021: INFO: Number of nodes with available pods: 0 Oct 27 11:06:44.021: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:45.021: INFO: Number of nodes with available pods: 0 Oct 27 11:06:45.021: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:46.020: INFO: Number of nodes with available pods: 0 Oct 27 11:06:46.020: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:47.029: INFO: Number of nodes with available pods: 0 Oct 27 11:06:47.029: 
INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:48.021: INFO: Number of nodes with available pods: 0 Oct 27 11:06:48.021: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:49.022: INFO: Number of nodes with available pods: 0 Oct 27 11:06:49.022: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:50.134: INFO: Number of nodes with available pods: 0 Oct 27 11:06:50.134: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:51.033: INFO: Number of nodes with available pods: 0 Oct 27 11:06:51.033: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:06:52.021: INFO: Number of nodes with available pods: 1 Oct 27 11:06:52.021: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6418, will wait for the garbage collector to delete the pods Oct 27 11:06:52.089: INFO: Deleting DaemonSet.extensions daemon-set took: 8.110931ms Oct 27 11:06:52.489: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.207556ms Oct 27 11:06:58.193: INFO: Number of nodes with available pods: 0 Oct 27 11:06:58.193: INFO: Number of running nodes: 0, number of available pods: 0 Oct 27 11:06:58.196: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6418/daemonsets","resourceVersion":"8967605"},"items":null} Oct 27 11:06:58.199: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6418/pods","resourceVersion":"8967605"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:06:58.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6418" for this suite. 
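The node-selector behaviour exercised by the DaemonSet test above can be reproduced by hand: a daemon pod only appears on nodes that carry the label the selector asks for, and relabelling the node evicts it again. A minimal sketch, assuming a reachable cluster and kubectl on the PATH; the name color-daemon and the color label are illustrative, not what the test generated:

  # DaemonSet that only schedules onto nodes labelled color=blue
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: color-daemon
  spec:
    selector:
      matchLabels:
        app: color-daemon
    updateStrategy:
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: color-daemon
      spec:
        nodeSelector:
          color: blue
        containers:
        - name: app
          image: httpd:2.4.38-alpine
  EOF
  # No daemon pods exist yet; label one node and a pod is launched there
  NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
  kubectl label node "$NODE" color=blue
  kubectl get pods -l app=color-daemon -o wide
  # Switching the label to green unschedules the daemon pod again
  kubectl label node "$NODE" color=green --overwrite
  kubectl delete daemonset color-daemon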
• [SLOW TEST:21.637 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":104,"skipped":1833,"failed":0} SSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:06:58.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:06:58.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1689" for this suite. 
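The PodTemplate lifecycle covered above (create, read, update, delete) maps onto ordinary kubectl verbs; PodTemplate is a plain core/v1 object whose template field holds a pod spec. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: demo-template
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: main
        image: httpd:2.4.38-alpine
  EOF
  kubectl get podtemplates                                  # read/list
  kubectl patch podtemplate demo-template --type=merge \
    -p '{"metadata":{"labels":{"updated":"true"}}}'         # update
  kubectl delete podtemplate demo-template                  # delete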
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":105,"skipped":1836,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:06:58.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 27 11:07:03.108: INFO: Successfully updated pod "pod-update-88b9cd96-4787-4ea4-91f8-0938006232e4" STEP: verifying the updated pod is in kubernetes Oct 27 11:07:03.206: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:07:03.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3080" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1858,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:07:03.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 27 11:07:03.348: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:07:18.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7401" for this suite. 
• [SLOW TEST:14.908 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":107,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:07:18.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:07:18.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e" in namespace "projected-7311" to be "Succeeded or Failed" Oct 27 11:07:18.215: INFO: Pod "downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 38.779267ms Oct 27 11:07:20.219: INFO: Pod "downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043034255s Oct 27 11:07:22.223: INFO: Pod "downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0469263s STEP: Saw pod success Oct 27 11:07:22.223: INFO: Pod "downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e" satisfied condition "Succeeded or Failed" Oct 27 11:07:22.226: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e container client-container: STEP: delete the pod Oct 27 11:07:23.126: INFO: Waiting for pod downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e to disappear Oct 27 11:07:23.128: INFO: Pod downwardapi-volume-af8400af-c425-4838-adaa-ffbefd229a5e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:07:23.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7311" for this suite. 
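The projected downwardAPI volume used above exposes a container's own resource limits as files, and the CPU-limit test reads such a file back from the running container. A minimal sketch with illustrative names and a 1m divisor so the file holds the limit in millicores:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cpu-limit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "sleep 300"]
      resources:
        limits:
          cpu: 500m
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m
  EOF
  kubectl wait --for=condition=Ready pod/projected-cpu-limit-demo
  kubectl exec projected-cpu-limit-demo -- cat /etc/podinfo/cpu_limit   # prints 500
  kubectl delete pod projected-cpu-limit-demo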
• [SLOW TEST:5.006 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1885,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:07:23.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Oct 27 11:07:27.797: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9440 pod-service-account-bf21087c-512c-42c6-9197-6c1788d0577d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 27 11:07:28.069: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9440 pod-service-account-bf21087c-512c-42c6-9197-6c1788d0577d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 27 11:07:28.281: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9440 pod-service-account-bf21087c-512c-42c6-9197-6c1788d0577d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:07:28.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9440" for this suite. 
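The three files read above are the standard service-account mount that pods receive by default (unless token automounting is disabled) at /var/run/secrets/kubernetes.io/serviceaccount. A quick sketch with an illustrative pod name:

  kubectl run sa-demo --image=busybox:1.29 --restart=Never -- sleep 300
  kubectl wait --for=condition=Ready pod/sa-demo
  kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/token       # bearer token
  kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt      # cluster CA
  kubectl exec sa-demo -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace   # pod's namespace
  kubectl delete pod sa-demo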
• [SLOW TEST:5.390 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":109,"skipped":1888,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:07:28.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 27 11:07:32.636: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3928 PodName:var-expansion-142bbc75-3d12-4756-b22e-b86fe053e2f5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:07:32.636: INFO: >>> kubeConfig: /root/.kube/config I1027 11:07:32.674257 7 log.go:181] (0xc000e1c420) (0xc002118b40) Create stream I1027 11:07:32.674305 7 log.go:181] (0xc000e1c420) (0xc002118b40) Stream added, broadcasting: 1 I1027 11:07:32.676118 7 log.go:181] (0xc000e1c420) Reply frame received for 1 I1027 11:07:32.676163 7 log.go:181] (0xc000e1c420) (0xc0015808c0) Create stream I1027 11:07:32.676182 7 log.go:181] (0xc000e1c420) (0xc0015808c0) Stream added, broadcasting: 3 I1027 11:07:32.677329 7 log.go:181] (0xc000e1c420) Reply frame received for 3 I1027 11:07:32.677374 7 log.go:181] (0xc000e1c420) (0xc004057ea0) Create stream I1027 11:07:32.677388 7 log.go:181] (0xc000e1c420) (0xc004057ea0) Stream added, broadcasting: 5 I1027 11:07:32.678489 7 log.go:181] (0xc000e1c420) Reply frame received for 5 I1027 11:07:32.784493 7 log.go:181] (0xc000e1c420) Data frame received for 3 I1027 11:07:32.784557 7 log.go:181] (0xc0015808c0) (3) Data frame handling I1027 11:07:32.784587 7 log.go:181] (0xc000e1c420) Data frame received for 5 I1027 11:07:32.784617 7 log.go:181] (0xc004057ea0) (5) Data frame handling I1027 11:07:32.786554 7 log.go:181] (0xc000e1c420) Data frame received for 1 I1027 11:07:32.786575 7 log.go:181] (0xc002118b40) (1) Data frame handling I1027 11:07:32.786588 7 log.go:181] (0xc002118b40) (1) Data frame sent I1027 11:07:32.786597 7 log.go:181] (0xc000e1c420) (0xc002118b40) Stream removed, broadcasting: 1 I1027 11:07:32.786653 7 log.go:181] (0xc000e1c420) (0xc002118b40) Stream removed, broadcasting: 1 I1027 11:07:32.786664 7 log.go:181] (0xc000e1c420) (0xc0015808c0) Stream removed, 
broadcasting: 3 I1027 11:07:32.786757 7 log.go:181] (0xc000e1c420) (0xc004057ea0) Stream removed, broadcasting: 5 I1027 11:07:32.786849 7 log.go:181] (0xc000e1c420) Go away received STEP: test for file in mounted path Oct 27 11:07:32.790: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3928 PodName:var-expansion-142bbc75-3d12-4756-b22e-b86fe053e2f5 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:07:32.790: INFO: >>> kubeConfig: /root/.kube/config I1027 11:07:32.819067 7 log.go:181] (0xc0006b98c0) (0xc0043d7c20) Create stream I1027 11:07:32.819087 7 log.go:181] (0xc0006b98c0) (0xc0043d7c20) Stream added, broadcasting: 1 I1027 11:07:32.821501 7 log.go:181] (0xc0006b98c0) Reply frame received for 1 I1027 11:07:32.821571 7 log.go:181] (0xc0006b98c0) (0xc001db0000) Create stream I1027 11:07:32.821593 7 log.go:181] (0xc0006b98c0) (0xc001db0000) Stream added, broadcasting: 3 I1027 11:07:32.822573 7 log.go:181] (0xc0006b98c0) Reply frame received for 3 I1027 11:07:32.822604 7 log.go:181] (0xc0006b98c0) (0xc002118be0) Create stream I1027 11:07:32.822618 7 log.go:181] (0xc0006b98c0) (0xc002118be0) Stream added, broadcasting: 5 I1027 11:07:32.823501 7 log.go:181] (0xc0006b98c0) Reply frame received for 5 I1027 11:07:32.895910 7 log.go:181] (0xc0006b98c0) Data frame received for 5 I1027 11:07:32.895964 7 log.go:181] (0xc002118be0) (5) Data frame handling I1027 11:07:32.896009 7 log.go:181] (0xc0006b98c0) Data frame received for 3 I1027 11:07:32.896048 7 log.go:181] (0xc001db0000) (3) Data frame handling I1027 11:07:32.897722 7 log.go:181] (0xc0006b98c0) Data frame received for 1 I1027 11:07:32.897774 7 log.go:181] (0xc0043d7c20) (1) Data frame handling I1027 11:07:32.897803 7 log.go:181] (0xc0043d7c20) (1) Data frame sent I1027 11:07:32.897826 7 log.go:181] (0xc0006b98c0) (0xc0043d7c20) Stream removed, broadcasting: 1 I1027 11:07:32.897856 7 log.go:181] (0xc0006b98c0) Go away received I1027 11:07:32.897967 7 log.go:181] (0xc0006b98c0) (0xc0043d7c20) Stream removed, broadcasting: 1 I1027 11:07:32.897982 7 log.go:181] (0xc0006b98c0) (0xc001db0000) Stream removed, broadcasting: 3 I1027 11:07:32.897992 7 log.go:181] (0xc0006b98c0) (0xc002118be0) Stream removed, broadcasting: 5 STEP: updating the annotation value Oct 27 11:07:33.408: INFO: Successfully updated pod "var-expansion-142bbc75-3d12-4756-b22e-b86fe053e2f5" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 27 11:07:33.436: INFO: Deleting pod "var-expansion-142bbc75-3d12-4756-b22e-b86fe053e2f5" in namespace "var-expansion-3928" Oct 27 11:07:33.440: INFO: Wait up to 5m0s for pod "var-expansion-142bbc75-3d12-4756-b22e-b86fe053e2f5" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:19.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3928" for this suite. 
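The subpath flow above relies on subPathExpr, which expands $(VAR) references from the container's environment into the mount's subdirectory; mounting the same volume a second time without a subpath makes the expanded directory visible. A minimal sketch with illustrative names, not the manifest the test generated:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "touch /volume_mount/test.log && sleep 300"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumeMounts:
      - name: workdir
        mountPath: /volume_mount
        subPathExpr: $(POD_NAME)     # expands to the pod's own name
      - name: workdir
        mountPath: /subpath_mount    # same volume, no subpath
    volumes:
    - name: workdir
      emptyDir: {}
  EOF
  kubectl wait --for=condition=Ready pod/subpath-demo
  # The file written under the expanded subpath is visible from the plain mount:
  kubectl exec subpath-demo -- ls /subpath_mount/subpath-demo/test.log
  kubectl delete pod subpath-demo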
• [SLOW TEST:50.953 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":110,"skipped":1903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:19.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:19.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7965" for this suite. 
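Server-side printing, which the test above exercises, is negotiated through the Accept header: clients ask for a meta.k8s.io/v1 Table, and an endpoint that cannot produce one answers 406 Not Acceptable when Table is the only acceptable type. A sketch using kubectl proxy so no extra credentials are needed; the port is illustrative:

  kubectl proxy --port=8001 &
  # Ask the API server to render pods as a Table (the representation kubectl get uses)
  curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
    http://127.0.0.1:8001/api/v1/namespaces/default/pods | head -c 400; echo
  kill %1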
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":111,"skipped":1927,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:19.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 27 11:08:19.703: INFO: Waiting up to 5m0s for pod "downward-api-8e983caa-f952-4201-9d59-83ab6e920710" in namespace "downward-api-6856" to be "Succeeded or Failed" Oct 27 11:08:19.726: INFO: Pod "downward-api-8e983caa-f952-4201-9d59-83ab6e920710": Phase="Pending", Reason="", readiness=false. Elapsed: 22.58988ms Oct 27 11:08:21.729: INFO: Pod "downward-api-8e983caa-f952-4201-9d59-83ab6e920710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026532674s Oct 27 11:08:23.734: INFO: Pod "downward-api-8e983caa-f952-4201-9d59-83ab6e920710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031198774s STEP: Saw pod success Oct 27 11:08:23.734: INFO: Pod "downward-api-8e983caa-f952-4201-9d59-83ab6e920710" satisfied condition "Succeeded or Failed" Oct 27 11:08:23.737: INFO: Trying to get logs from node kali-worker pod downward-api-8e983caa-f952-4201-9d59-83ab6e920710 container dapi-container: STEP: delete the pod Oct 27 11:08:23.775: INFO: Waiting for pod downward-api-8e983caa-f952-4201-9d59-83ab6e920710 to disappear Oct 27 11:08:23.781: INFO: Pod downward-api-8e983caa-f952-4201-9d59-83ab6e920710 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:23.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6856" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1980,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:23.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:08:23.913: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"88b6b880-6d49-4198-8d86-e26f6d01d1a5", Controller:(*bool)(0xc00336d702), BlockOwnerDeletion:(*bool)(0xc00336d703)}} Oct 27 11:08:24.001: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"86984be5-e512-4a49-8137-5ae000911d5e", Controller:(*bool)(0xc004429e86), BlockOwnerDeletion:(*bool)(0xc004429e87)}} Oct 27 11:08:24.017: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3f111124-db4f-4d4d-840d-25da60524027", Controller:(*bool)(0xc00440d236), BlockOwnerDeletion:(*bool)(0xc00440d237)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:29.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-869" for this suite. 
• [SLOW TEST:5.304 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":113,"skipped":1984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:29.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:29.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2774" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":114,"skipped":2014,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:29.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:44.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1897" for this suite. STEP: Destroying namespace "nsdeletetest-7101" for this suite. Oct 27 11:08:44.454: INFO: Namespace nsdeletetest-7101 was already deleted STEP: Destroying namespace "nsdeletetest-4718" for this suite. 
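Namespace deletion cascades to everything inside it, which is what the test above verifies by recreating the namespace and finding it empty. A minimal sketch with illustrative names:

  kubectl create namespace nsdelete-demo
  kubectl run test-pod --image=httpd:2.4.38-alpine --restart=Never -n nsdelete-demo
  kubectl wait --for=condition=Ready pod/test-pod -n nsdelete-demo
  # Deleting the namespace removes the pod with it
  kubectl delete namespace nsdelete-demo --wait
  kubectl create namespace nsdelete-demo
  kubectl get pods -n nsdelete-demo    # No resources found
  kubectl delete namespace nsdelete-demo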
• [SLOW TEST:15.307 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":115,"skipped":2027,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:44.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Oct 27 11:08:44.554: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4693 /api/v1/namespaces/watch-4693/configmaps/e2e-watch-test-watch-closed 5fbf11c7-9899-4fb0-b3c5-1475357247ec 8968213 0 2020-10-27 11:08:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-27 11:08:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 11:08:44.555: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4693 /api/v1/namespaces/watch-4693/configmaps/e2e-watch-test-watch-closed 5fbf11c7-9899-4fb0-b3c5-1475357247ec 8968214 0 2020-10-27 11:08:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-27 11:08:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 27 11:08:44.603: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4693 /api/v1/namespaces/watch-4693/configmaps/e2e-watch-test-watch-closed 5fbf11c7-9899-4fb0-b3c5-1475357247ec 8968215 0 2020-10-27 11:08:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test 
Update v1 2020-10-27 11:08:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 11:08:44.604: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4693 /api/v1/namespaces/watch-4693/configmaps/e2e-watch-test-watch-closed 5fbf11c7-9899-4fb0-b3c5-1475357247ec 8968216 0 2020-10-27 11:08:44 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-27 11:08:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:44.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4693" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":116,"skipped":2033,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:44.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 27 11:08:44.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6637' Oct 27 11:08:44.796: INFO: stderr: "" Oct 27 11:08:44.796: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 27 11:08:49.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6637 -o json' Oct 27 11:08:49.957: INFO: stderr: "" Oct 27 11:08:49.957: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-27T11:08:44Z\",\n \"labels\": {\n \"run\": 
\"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-27T11:08:44Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.92\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-27T11:08:48Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6637\",\n \"resourceVersion\": \"8968241\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6637/pods/e2e-test-httpd-pod\",\n \"uid\": \"0d974710-38aa-480b-87ea-96101eedbbb7\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sn7rm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sn7rm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sn7rm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-27T11:08:44Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-27T11:08:48Z\",\n \"status\": \"True\",\n 
\"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-27T11:08:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-27T11:08:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://98f50888649a712d17f8d5f595f75593deebd5300bfc723cf46050f3f7572f53\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-10-27T11:08:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.92\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.92\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-27T11:08:44Z\"\n }\n}\n" STEP: replace the image in the pod Oct 27 11:08:49.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6637' Oct 27 11:08:50.334: INFO: stderr: "" Oct 27 11:08:50.334: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Oct 27 11:08:50.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6637' Oct 27 11:08:58.662: INFO: stderr: "" Oct 27 11:08:58.662: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:08:58.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6637" for this suite. 
• [SLOW TEST:14.063 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":117,"skipped":2036,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:08:58.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 27 11:09:06.824: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 27 11:09:06.833: INFO: Pod pod-with-prestop-http-hook still exists Oct 27 11:09:08.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 27 11:09:08.837: INFO: Pod pod-with-prestop-http-hook still exists Oct 27 11:09:10.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 27 11:09:10.837: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:10.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2299" for this suite. 
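A preStop httpGet hook, as exercised above, makes the kubelet issue an HTTP request before the container receives SIGTERM during deletion. The conformance test points the hook at a separate handler pod that records the request; the sketch below keeps things self-contained by targeting the container's own port, with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-hook-demo
  spec:
    containers:
    - name: main
      image: httpd:2.4.38-alpine
      ports:
      - containerPort: 80
      lifecycle:
        preStop:
          httpGet:
            path: /
            port: 80      # the kubelet calls the pod's own IP before SIGTERM
  EOF
  kubectl wait --for=condition=Ready pod/prestop-hook-demo
  kubectl delete pod prestop-hook-demo   # the GET to / runs first, then graceful shutdown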
• [SLOW TEST:12.178 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":2042,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:10.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:09:10.926: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:11.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8916" for this suite. 
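Creating and deleting a CustomResourceDefinition, as above, needs nothing beyond a cluster-admin kubeconfig and a minimal apiextensions.k8s.io/v1 manifest; the group and kind below are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.demo.example.com   # must be <plural>.<group>
  spec:
    group: demo.example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF
  kubectl get crd widgets.demo.example.com
  kubectl delete crd widgets.demo.example.com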
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":119,"skipped":2094,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:11.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 27 11:09:12.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7565' Oct 27 11:09:15.517: INFO: stderr: "" Oct 27 11:09:15.517: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 27 11:09:15.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7565' Oct 27 11:09:15.741: INFO: stderr: "" Oct 27 11:09:15.741: INFO: stdout: "update-demo-nautilus-8vjcf update-demo-nautilus-pthv8 " Oct 27 11:09:15.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vjcf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:15.850: INFO: stderr: "" Oct 27 11:09:15.850: INFO: stdout: "" Oct 27 11:09:15.850: INFO: update-demo-nautilus-8vjcf is created but not running Oct 27 11:09:20.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7565' Oct 27 11:09:20.969: INFO: stderr: "" Oct 27 11:09:20.969: INFO: stdout: "update-demo-nautilus-8vjcf update-demo-nautilus-pthv8 " Oct 27 11:09:20.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vjcf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:21.070: INFO: stderr: "" Oct 27 11:09:21.070: INFO: stdout: "true" Oct 27 11:09:21.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vjcf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:21.175: INFO: stderr: "" Oct 27 11:09:21.175: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:09:21.175: INFO: validating pod update-demo-nautilus-8vjcf Oct 27 11:09:21.179: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:09:21.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:09:21.179: INFO: update-demo-nautilus-8vjcf is verified up and running Oct 27 11:09:21.179: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pthv8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:21.274: INFO: stderr: "" Oct 27 11:09:21.274: INFO: stdout: "true" Oct 27 11:09:21.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pthv8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:21.377: INFO: stderr: "" Oct 27 11:09:21.378: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:09:21.378: INFO: validating pod update-demo-nautilus-pthv8 Oct 27 11:09:21.381: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:09:21.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:09:21.381: INFO: update-demo-nautilus-pthv8 is verified up and running STEP: scaling down the replication controller Oct 27 11:09:21.384: INFO: scanned /root for discovery docs: Oct 27 11:09:21.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7565' Oct 27 11:09:22.511: INFO: stderr: "" Oct 27 11:09:22.511: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 27 11:09:22.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7565' Oct 27 11:09:22.621: INFO: stderr: "" Oct 27 11:09:22.621: INFO: stdout: "update-demo-nautilus-8vjcf update-demo-nautilus-pthv8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 27 11:09:27.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7565' Oct 27 11:09:27.754: INFO: stderr: "" Oct 27 11:09:27.754: INFO: stdout: "update-demo-nautilus-pthv8 " Oct 27 11:09:27.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pthv8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:27.847: INFO: stderr: "" Oct 27 11:09:27.847: INFO: stdout: "true" Oct 27 11:09:27.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pthv8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:27.947: INFO: stderr: "" Oct 27 11:09:27.947: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:09:27.947: INFO: validating pod update-demo-nautilus-pthv8 Oct 27 11:09:27.950: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:09:27.950: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:09:27.950: INFO: update-demo-nautilus-pthv8 is verified up and running STEP: scaling up the replication controller Oct 27 11:09:27.952: INFO: scanned /root for discovery docs: Oct 27 11:09:27.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7565' Oct 27 11:09:29.128: INFO: stderr: "" Oct 27 11:09:29.128: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 27 11:09:29.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7565' Oct 27 11:09:29.249: INFO: stderr: "" Oct 27 11:09:29.249: INFO: stdout: "update-demo-nautilus-cp5xl update-demo-nautilus-pthv8 " Oct 27 11:09:29.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cp5xl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:29.361: INFO: stderr: "" Oct 27 11:09:29.361: INFO: stdout: "" Oct 27 11:09:29.361: INFO: update-demo-nautilus-cp5xl is created but not running Oct 27 11:09:34.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7565' Oct 27 11:09:34.475: INFO: stderr: "" Oct 27 11:09:34.475: INFO: stdout: "update-demo-nautilus-cp5xl update-demo-nautilus-pthv8 " Oct 27 11:09:34.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cp5xl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:34.584: INFO: stderr: "" Oct 27 11:09:34.584: INFO: stdout: "true" Oct 27 11:09:34.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cp5xl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:34.695: INFO: stderr: "" Oct 27 11:09:34.695: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:09:34.695: INFO: validating pod update-demo-nautilus-cp5xl Oct 27 11:09:34.699: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:09:34.700: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:09:34.700: INFO: update-demo-nautilus-cp5xl is verified up and running Oct 27 11:09:34.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pthv8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:34.799: INFO: stderr: "" Oct 27 11:09:34.799: INFO: stdout: "true" Oct 27 11:09:34.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pthv8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7565' Oct 27 11:09:34.906: INFO: stderr: "" Oct 27 11:09:34.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:09:34.906: INFO: validating pod update-demo-nautilus-pthv8 Oct 27 11:09:34.910: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:09:34.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:09:34.910: INFO: update-demo-nautilus-pthv8 is verified up and running STEP: using delete to clean up resources Oct 27 11:09:34.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7565' Oct 27 11:09:35.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:09:35.028: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 27 11:09:35.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7565' Oct 27 11:09:35.128: INFO: stderr: "No resources found in kubectl-7565 namespace.\n" Oct 27 11:09:35.128: INFO: stdout: "" Oct 27 11:09:35.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7565 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 27 11:09:35.238: INFO: stderr: "" Oct 27 11:09:35.238: INFO: stdout: "update-demo-nautilus-cp5xl\nupdate-demo-nautilus-pthv8\n" Oct 27 11:09:35.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7565' Oct 27 11:09:35.922: INFO: stderr: "No resources found in kubectl-7565 namespace.\n" Oct 27 11:09:35.922: INFO: stdout: "" Oct 27 11:09:35.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7565 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 27 11:09:36.051: INFO: stderr: "" Oct 27 11:09:36.052: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:36.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7565" for this suite. 
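Note: the scale-down/scale-up sequence above can be reproduced by hand; the commands below mirror the ones the test runs (names and namespace are taken from the log, any namespace works), assuming the update-demo-nautilus ReplicationController already exists:

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n kubectl-7565   # scale down to one replica
kubectl get pods -l name=update-demo -n kubectl-7565                              # expect a single pod
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n kubectl-7565   # scale back up
kubectl delete rc update-demo-nautilus --grace-period=0 --force -n kubectl-7565   # force-delete, as the test cleanup does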
• [SLOW TEST:24.418 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":120,"skipped":2112,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:36.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 27 11:09:36.461: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 27 11:09:41.474: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:41.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5033" for this suite. 
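Note: the "release" in the test above works by changing a pod's labels so it no longer matches the controller's selector; the ReplicationController then drops its ownerReference on that pod and creates a replacement. A hedged sketch with placeholder pod name, namespace, and label key/value (the real selector is generated by the test):

kubectl label pod pod-release-xxxxx name=released --overwrite -n <namespace>                   # stop matching the RC selector
kubectl get pod pod-release-xxxxx -o jsonpath='{.metadata.ownerReferences}' -n <namespace>     # now empty: the pod was released
kubectl get pods -l name=pod-release -n <namespace>                                            # the RC has created a replacement pod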
• [SLOW TEST:5.473 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":121,"skipped":2125,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:41.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-bdd8e7ae-459f-441e-8cb1-f3e9c45eddde STEP: Creating a pod to test consume secrets Oct 27 11:09:42.246: INFO: Waiting up to 5m0s for pod "pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84" in namespace "secrets-1096" to be "Succeeded or Failed" Oct 27 11:09:42.312: INFO: Pod "pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84": Phase="Pending", Reason="", readiness=false. Elapsed: 66.421011ms Oct 27 11:09:44.396: INFO: Pod "pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150344425s Oct 27 11:09:46.400: INFO: Pod "pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154367998s STEP: Saw pod success Oct 27 11:09:46.400: INFO: Pod "pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84" satisfied condition "Succeeded or Failed" Oct 27 11:09:46.403: INFO: Trying to get logs from node kali-worker pod pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84 container secret-volume-test: STEP: delete the pod Oct 27 11:09:46.590: INFO: Waiting for pod pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84 to disappear Oct 27 11:09:46.637: INFO: Pod pod-secrets-8083ac0d-86e4-4e0d-a0a7-aa36d36a3e84 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:46.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1096" for this suite. 
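Note: a minimal pod manifest mirroring what the secrets test above exercises: a secret volume consumed as non-root with defaultMode and fsGroup set. Secret name, image, and IDs are placeholders, not the generated ones from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  securityContext:
    runAsUser: 1000        # run the container as non-root
    fsGroup: 2000          # projected files are group-owned by 2000
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0400    # octal mode applied to every projected key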
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":2127,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:46.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2e94be09-0310-4b34-9163-a23461f77ab3 STEP: Creating a pod to test consume configMaps Oct 27 11:09:46.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-d30260f9-5000-4643-984b-67124774852e" in namespace "configmap-4484" to be "Succeeded or Failed" Oct 27 11:09:46.810: INFO: Pod "pod-configmaps-d30260f9-5000-4643-984b-67124774852e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.600071ms Oct 27 11:09:48.911: INFO: Pod "pod-configmaps-d30260f9-5000-4643-984b-67124774852e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104922351s Oct 27 11:09:50.915: INFO: Pod "pod-configmaps-d30260f9-5000-4643-984b-67124774852e": Phase="Running", Reason="", readiness=true. Elapsed: 4.109112499s Oct 27 11:09:52.919: INFO: Pod "pod-configmaps-d30260f9-5000-4643-984b-67124774852e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113093982s STEP: Saw pod success Oct 27 11:09:52.919: INFO: Pod "pod-configmaps-d30260f9-5000-4643-984b-67124774852e" satisfied condition "Succeeded or Failed" Oct 27 11:09:52.925: INFO: Trying to get logs from node kali-worker pod pod-configmaps-d30260f9-5000-4643-984b-67124774852e container configmap-volume-test: STEP: delete the pod Oct 27 11:09:52.958: INFO: Waiting for pod pod-configmaps-d30260f9-5000-4643-984b-67124774852e to disappear Oct 27 11:09:52.976: INFO: Pod pod-configmaps-d30260f9-5000-4643-984b-67124774852e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:52.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4484" for this suite. 
• [SLOW TEST:6.364 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2149,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:53.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:09:53.136: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:53.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3358" for this suite. 
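Note: CustomResourceDefinitions have a status subresource of their own, which the test above reads, updates, and patches through the API. A hedged sketch of inspecting it by hand; the CRD name is a placeholder:

kubectl get crd foos.example.com -o jsonpath='{.status.conditions[*].type}'
# the subresource can also be fetched directly from the API, e.g. through kubectl proxy:
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.example.com/status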
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":124,"skipped":2169,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:53.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:09:53.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262" in namespace "downward-api-3443" to be "Succeeded or Failed" Oct 27 11:09:53.929: INFO: Pod "downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262": Phase="Pending", Reason="", readiness=false. Elapsed: 47.37393ms Oct 27 11:09:55.932: INFO: Pod "downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050231272s Oct 27 11:09:57.936: INFO: Pod "downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054588237s STEP: Saw pod success Oct 27 11:09:57.936: INFO: Pod "downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262" satisfied condition "Succeeded or Failed" Oct 27 11:09:57.939: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262 container client-container: STEP: delete the pod Oct 27 11:09:58.105: INFO: Waiting for pod downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262 to disappear Oct 27 11:09:58.115: INFO: Pod downwardapi-volume-2bc371a2-e6a4-4477-8200-d125e91ce262 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:09:58.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3443" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:09:58.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:09:59.220: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:10:01.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:10:03.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739393799, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 
11:10:06.433: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:10:06.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-19" for this suite. STEP: Destroying namespace "webhook-19-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.510 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":126,"skipped":2228,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:10:06.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the 
expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:10:37.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2474" for this suite. • [SLOW TEST:30.828 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":127,"skipped":2248,"failed":0} [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:10:37.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 27 11:10:37.529: INFO: Waiting up to 1m0s for all nodes to be ready Oct 27 11:11:37.551: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:11:37.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Oct 27 11:11:41.673: INFO: found a healthy node: kali-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:11:54.406: INFO: pods created so far: [1 1 1] Oct 27 11:11:54.406: INFO: length of pods created so far: 3 Oct 27 11:12:12.416: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:12:19.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1215" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:12:19.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9213" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:102.121 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":128,"skipped":2248,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:12:19.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:12:23.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1198" for this suite. 
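Note: the watch test above opens several watches from the same resourceVersion and checks that they deliver events in identical order. Raw watches can be reproduced against the API directly; a hedged sketch using kubectl proxy, with placeholder namespace and resourceVersion:

kubectl proxy --port=8001 &
# two concurrent watches started from the same resourceVersion should stream events in the same order
curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/<ns>/configmaps?watch=true&resourceVersion=<rv>' &
curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/<ns>/configmaps?watch=true&resourceVersion=<rv>'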
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":129,"skipped":2267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:12:23.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Oct 27 11:12:23.745: INFO: created test-pod-1 Oct 27 11:12:23.750: INFO: created test-pod-2 Oct 27 11:12:23.770: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:12:24.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2662" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":130,"skipped":2322,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:12:24.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4869d2de-ebd8-4a8f-bb83-92ad347418d1 STEP: Creating a pod to test consume secrets Oct 27 11:12:24.190: INFO: Waiting up to 5m0s for pod "pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830" in namespace "secrets-5685" to be "Succeeded or Failed" Oct 27 11:12:24.209: INFO: Pod "pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830": Phase="Pending", Reason="", readiness=false. Elapsed: 19.072862ms Oct 27 11:12:26.291: INFO: Pod "pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.10083732s Oct 27 11:12:28.295: INFO: Pod "pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104381393s Oct 27 11:12:30.422: INFO: Pod "pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231655141s STEP: Saw pod success Oct 27 11:12:30.422: INFO: Pod "pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830" satisfied condition "Succeeded or Failed" Oct 27 11:12:30.425: INFO: Trying to get logs from node kali-worker pod pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830 container secret-volume-test: STEP: delete the pod Oct 27 11:12:30.455: INFO: Waiting for pod pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830 to disappear Oct 27 11:12:30.460: INFO: Pod pod-secrets-4735ff42-37af-431f-b8ee-ab3f378b7830 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:12:30.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5685" for this suite. • [SLOW TEST:6.378 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":2327,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:12:30.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-6dgx STEP: Creating a pod to test atomic-volume-subpath Oct 27 11:12:30.594: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6dgx" in namespace "subpath-642" to be "Succeeded or Failed" Oct 27 11:12:30.622: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Pending", Reason="", readiness=false. Elapsed: 28.169955ms Oct 27 11:12:32.626: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031988209s Oct 27 11:12:34.630: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.03656244s Oct 27 11:12:36.636: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 6.041818116s Oct 27 11:12:38.640: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 8.045964536s Oct 27 11:12:40.645: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 10.051396237s Oct 27 11:12:42.649: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 12.055118481s Oct 27 11:12:44.652: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 14.058391243s Oct 27 11:12:46.657: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 16.063484149s Oct 27 11:12:48.661: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 18.067328543s Oct 27 11:12:50.666: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 20.072247596s Oct 27 11:12:52.670: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Running", Reason="", readiness=true. Elapsed: 22.076391748s Oct 27 11:12:54.678: INFO: Pod "pod-subpath-test-projected-6dgx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.084280856s STEP: Saw pod success Oct 27 11:12:54.678: INFO: Pod "pod-subpath-test-projected-6dgx" satisfied condition "Succeeded or Failed" Oct 27 11:12:54.681: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-6dgx container test-container-subpath-projected-6dgx: STEP: delete the pod Oct 27 11:12:54.695: INFO: Waiting for pod pod-subpath-test-projected-6dgx to disappear Oct 27 11:12:54.699: INFO: Pod pod-subpath-test-projected-6dgx no longer exists STEP: Deleting pod pod-subpath-test-projected-6dgx Oct 27 11:12:54.699: INFO: Deleting pod "pod-subpath-test-projected-6dgx" in namespace "subpath-642" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:12:54.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-642" for this suite. 
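Note: the atomic-writer subpath test above mounts a single path out of a projected volume with subPath. A minimal pod sketch with placeholder names; the projected source here is the downward API, one of the source types this test family covers:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["sh", "-c", "cat /test/podname"]
    volumeMounts:
    - name: projected-volume
      mountPath: /test/podname
      subPath: podname          # mount only this path from the volume
  volumes:
  - name: projected-volume
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name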
• [SLOW TEST:24.227 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":132,"skipped":2327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:12:54.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 27 11:12:54.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9437' Oct 27 11:12:55.236: INFO: stderr: "" Oct 27 11:12:55.236: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 27 11:12:55.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9437' Oct 27 11:12:55.387: INFO: stderr: "" Oct 27 11:12:55.387: INFO: stdout: "update-demo-nautilus-gplj2 update-demo-nautilus-kpq5h " Oct 27 11:12:55.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gplj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9437' Oct 27 11:12:55.508: INFO: stderr: "" Oct 27 11:12:55.508: INFO: stdout: "" Oct 27 11:12:55.508: INFO: update-demo-nautilus-gplj2 is created but not running Oct 27 11:13:00.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9437' Oct 27 11:13:00.615: INFO: stderr: "" Oct 27 11:13:00.615: INFO: stdout: "update-demo-nautilus-gplj2 update-demo-nautilus-kpq5h " Oct 27 11:13:00.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gplj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9437' Oct 27 11:13:00.721: INFO: stderr: "" Oct 27 11:13:00.722: INFO: stdout: "true" Oct 27 11:13:00.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gplj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9437' Oct 27 11:13:00.826: INFO: stderr: "" Oct 27 11:13:00.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:13:00.826: INFO: validating pod update-demo-nautilus-gplj2 Oct 27 11:13:00.848: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:13:00.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:13:00.849: INFO: update-demo-nautilus-gplj2 is verified up and running Oct 27 11:13:00.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpq5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9437' Oct 27 11:13:00.949: INFO: stderr: "" Oct 27 11:13:00.949: INFO: stdout: "true" Oct 27 11:13:00.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpq5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9437' Oct 27 11:13:01.051: INFO: stderr: "" Oct 27 11:13:01.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 27 11:13:01.051: INFO: validating pod update-demo-nautilus-kpq5h Oct 27 11:13:01.055: INFO: got data: { "image": "nautilus.jpg" } Oct 27 11:13:01.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 27 11:13:01.055: INFO: update-demo-nautilus-kpq5h is verified up and running STEP: using delete to clean up resources Oct 27 11:13:01.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9437' Oct 27 11:13:01.179: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:13:01.180: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 27 11:13:01.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9437' Oct 27 11:13:01.279: INFO: stderr: "No resources found in kubectl-9437 namespace.\n" Oct 27 11:13:01.279: INFO: stdout: "" Oct 27 11:13:01.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9437 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 27 11:13:01.385: INFO: stderr: "" Oct 27 11:13:01.385: INFO: stdout: "update-demo-nautilus-gplj2\nupdate-demo-nautilus-kpq5h\n" Oct 27 11:13:01.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9437' Oct 27 11:13:02.019: INFO: stderr: "No resources found in kubectl-9437 namespace.\n" Oct 27 11:13:02.019: INFO: stdout: "" Oct 27 11:13:02.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9437 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 27 11:13:02.117: INFO: stderr: "" Oct 27 11:13:02.117: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:02.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9437" for this suite. 
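Note: the create/stop flow above pipes a ReplicationController manifest into kubectl create -f - and then force-deletes it. A sketch of an equivalent manifest; replicas, labels, and image mirror the log, the container port is a placeholder:

cat <<'EOF' | kubectl create -f - -n kubectl-9437
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80     # placeholder port
EOF
kubectl delete rc update-demo-nautilus --grace-period=0 --force -n kubectl-9437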
• [SLOW TEST:7.415 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":133,"skipped":2352,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:02.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:13:02.833: INFO: Creating ReplicaSet my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb Oct 27 11:13:02.898: INFO: Pod name my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb: Found 0 pods out of 1 Oct 27 11:13:07.903: INFO: Pod name my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb: Found 1 pods out of 1 Oct 27 11:13:07.903: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb" is running Oct 27 11:13:07.906: INFO: Pod "my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb-kvc8m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:13:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:13:05 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:13:05 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:13:02 +0000 UTC Reason: Message:}]) Oct 27 11:13:07.907: INFO: Trying to dial the pod Oct 27 11:13:12.919: INFO: Controller my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb: Got expected result from replica 1 [my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb-kvc8m]: "my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb-kvc8m", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:12.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9003" for this suite. 
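The ReplicaSet test above creates a single-replica ReplicaSet and then dials the pod to confirm it serves its own hostname. A hedged sketch of an equivalent ReplicaSet object built from the k8s.io/api types is below; the name and replica count come from the log, while the labels, container name, and image are placeholders rather than the values the e2e framework uses.

```go
// Sketch of a minimal single-replica ReplicaSet like the one the test creates.
// Labels, container name, and image are placeholders, not the e2e framework's.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"} // assumed label key/value

	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-1d3b8195-14c2-47f4-8003-93a15a4bb6fb"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",           // assumed container name
						Image: "example.com/serve-hostname:1", // placeholder image
					}},
				},
			},
		},
	}
	// In a real client this would be submitted with
	// client.AppsV1().ReplicaSets(namespace).Create(ctx, rs, metav1.CreateOptions{}).
	fmt.Printf("%+v\n", rs.Spec)
}
```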
• [SLOW TEST:10.802 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":134,"skipped":2352,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:12.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:13:12.996: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 27 11:13:14.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6610 create -f -' Oct 27 11:13:18.445: INFO: stderr: "" Oct 27 11:13:18.445: INFO: stdout: "e2e-test-crd-publish-openapi-4802-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 27 11:13:18.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6610 delete e2e-test-crd-publish-openapi-4802-crds test-cr' Oct 27 11:13:18.569: INFO: stderr: "" Oct 27 11:13:18.569: INFO: stdout: "e2e-test-crd-publish-openapi-4802-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 27 11:13:18.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6610 apply -f -' Oct 27 11:13:18.856: INFO: stderr: "" Oct 27 11:13:18.856: INFO: stdout: "e2e-test-crd-publish-openapi-4802-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 27 11:13:18.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6610 delete e2e-test-crd-publish-openapi-4802-crds test-cr' Oct 27 11:13:18.970: INFO: stderr: "" Oct 27 11:13:18.970: INFO: stdout: "e2e-test-crd-publish-openapi-4802-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 27 11:13:18.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4802-crds' Oct 27 
11:13:19.331: INFO: stderr: "" Oct 27 11:13:19.331: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4802-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:22.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6610" for this suite. • [SLOW TEST:9.383 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":135,"skipped":2369,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:22.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:22.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8920" for this suite. 
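The discovery-document test above fetches /apis and walks down to the customresourcedefinitions resource under apiextensions.k8s.io/v1. A minimal sketch of the same walk using client-go's discovery client is shown below; the kubeconfig path matches the log, and the printed output is illustrative.

```go
// Sketch: list the API groups from /apis and the resources of
// apiextensions.k8s.io/v1, roughly the lookups the discovery-document test performs.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// /apis discovery document: confirm the apiextensions.k8s.io group is present.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("found group:", g.Name, "preferred:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/apiextensions.k8s.io/v1 discovery document: look for customresourcedefinitions.
	res, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range res.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("found resource:", r.Name)
		}
	}
}
```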
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":136,"skipped":2384,"failed":0} SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:22.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 27 11:13:32.558: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:32.558: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:32.594668 7 log.go:181] (0xc000e1c420) (0xc00424d220) Create stream I1027 11:13:32.594708 7 log.go:181] (0xc000e1c420) (0xc00424d220) Stream added, broadcasting: 1 I1027 11:13:32.596756 7 log.go:181] (0xc000e1c420) Reply frame received for 1 I1027 11:13:32.596812 7 log.go:181] (0xc000e1c420) (0xc00048d0e0) Create stream I1027 11:13:32.596831 7 log.go:181] (0xc000e1c420) (0xc00048d0e0) Stream added, broadcasting: 3 I1027 11:13:32.597825 7 log.go:181] (0xc000e1c420) Reply frame received for 3 I1027 11:13:32.597855 7 log.go:181] (0xc000e1c420) (0xc003e23400) Create stream I1027 11:13:32.597864 7 log.go:181] (0xc000e1c420) (0xc003e23400) Stream added, broadcasting: 5 I1027 11:13:32.598623 7 log.go:181] (0xc000e1c420) Reply frame received for 5 I1027 11:13:32.686449 7 log.go:181] (0xc000e1c420) Data frame received for 3 I1027 11:13:32.686504 7 log.go:181] (0xc00048d0e0) (3) Data frame handling I1027 11:13:32.686526 7 log.go:181] (0xc00048d0e0) (3) Data frame sent I1027 11:13:32.686542 7 log.go:181] (0xc000e1c420) Data frame received for 3 I1027 11:13:32.686555 7 log.go:181] (0xc00048d0e0) (3) Data frame handling I1027 11:13:32.686584 7 log.go:181] (0xc000e1c420) Data frame received for 5 I1027 11:13:32.686615 7 log.go:181] (0xc003e23400) (5) Data frame handling I1027 11:13:32.692081 7 log.go:181] (0xc000e1c420) Data frame received for 1 I1027 11:13:32.692103 7 log.go:181] (0xc00424d220) (1) Data frame handling I1027 11:13:32.692115 7 log.go:181] (0xc00424d220) (1) Data frame sent I1027 11:13:32.692126 7 log.go:181] (0xc000e1c420) (0xc00424d220) Stream removed, broadcasting: 1 I1027 11:13:32.692141 7 log.go:181] (0xc000e1c420) Go away received I1027 11:13:32.692267 7 log.go:181] (0xc000e1c420) (0xc00424d220) Stream removed, broadcasting: 1 I1027 11:13:32.692292 7 log.go:181] (0xc000e1c420) 
(0xc00048d0e0) Stream removed, broadcasting: 3 I1027 11:13:32.692311 7 log.go:181] (0xc000e1c420) (0xc003e23400) Stream removed, broadcasting: 5 Oct 27 11:13:32.692: INFO: Exec stderr: "" Oct 27 11:13:32.692: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:32.692: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:32.728135 7 log.go:181] (0xc000e1ce70) (0xc00424d4a0) Create stream I1027 11:13:32.728176 7 log.go:181] (0xc000e1ce70) (0xc00424d4a0) Stream added, broadcasting: 1 I1027 11:13:32.730413 7 log.go:181] (0xc000e1ce70) Reply frame received for 1 I1027 11:13:32.730478 7 log.go:181] (0xc000e1ce70) (0xc00424d540) Create stream I1027 11:13:32.730500 7 log.go:181] (0xc000e1ce70) (0xc00424d540) Stream added, broadcasting: 3 I1027 11:13:32.731471 7 log.go:181] (0xc000e1ce70) Reply frame received for 3 I1027 11:13:32.731506 7 log.go:181] (0xc000e1ce70) (0xc00424d5e0) Create stream I1027 11:13:32.731522 7 log.go:181] (0xc000e1ce70) (0xc00424d5e0) Stream added, broadcasting: 5 I1027 11:13:32.732672 7 log.go:181] (0xc000e1ce70) Reply frame received for 5 I1027 11:13:32.795828 7 log.go:181] (0xc000e1ce70) Data frame received for 5 I1027 11:13:32.795873 7 log.go:181] (0xc00424d5e0) (5) Data frame handling I1027 11:13:32.795904 7 log.go:181] (0xc000e1ce70) Data frame received for 3 I1027 11:13:32.795918 7 log.go:181] (0xc00424d540) (3) Data frame handling I1027 11:13:32.795931 7 log.go:181] (0xc00424d540) (3) Data frame sent I1027 11:13:32.796040 7 log.go:181] (0xc000e1ce70) Data frame received for 3 I1027 11:13:32.796064 7 log.go:181] (0xc00424d540) (3) Data frame handling I1027 11:13:32.797376 7 log.go:181] (0xc000e1ce70) Data frame received for 1 I1027 11:13:32.797394 7 log.go:181] (0xc00424d4a0) (1) Data frame handling I1027 11:13:32.797409 7 log.go:181] (0xc00424d4a0) (1) Data frame sent I1027 11:13:32.797419 7 log.go:181] (0xc000e1ce70) (0xc00424d4a0) Stream removed, broadcasting: 1 I1027 11:13:32.797431 7 log.go:181] (0xc000e1ce70) Go away received I1027 11:13:32.797589 7 log.go:181] (0xc000e1ce70) (0xc00424d4a0) Stream removed, broadcasting: 1 I1027 11:13:32.797609 7 log.go:181] (0xc000e1ce70) (0xc00424d540) Stream removed, broadcasting: 3 I1027 11:13:32.797618 7 log.go:181] (0xc000e1ce70) (0xc00424d5e0) Stream removed, broadcasting: 5 Oct 27 11:13:32.797: INFO: Exec stderr: "" Oct 27 11:13:32.797: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:32.797: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:32.830581 7 log.go:181] (0xc000028420) (0xc000695e00) Create stream I1027 11:13:32.830611 7 log.go:181] (0xc000028420) (0xc000695e00) Stream added, broadcasting: 1 I1027 11:13:32.832334 7 log.go:181] (0xc000028420) Reply frame received for 1 I1027 11:13:32.832367 7 log.go:181] (0xc000028420) (0xc00293b4a0) Create stream I1027 11:13:32.832378 7 log.go:181] (0xc000028420) (0xc00293b4a0) Stream added, broadcasting: 3 I1027 11:13:32.833391 7 log.go:181] (0xc000028420) Reply frame received for 3 I1027 11:13:32.833437 7 log.go:181] (0xc000028420) (0xc003e23540) Create stream I1027 11:13:32.833451 7 log.go:181] (0xc000028420) (0xc003e23540) Stream added, broadcasting: 5 I1027 11:13:32.834410 7 log.go:181] (0xc000028420) Reply frame received for 5 I1027 
11:13:32.908391 7 log.go:181] (0xc000028420) Data frame received for 5 I1027 11:13:32.908428 7 log.go:181] (0xc003e23540) (5) Data frame handling I1027 11:13:32.908453 7 log.go:181] (0xc000028420) Data frame received for 3 I1027 11:13:32.908469 7 log.go:181] (0xc00293b4a0) (3) Data frame handling I1027 11:13:32.908485 7 log.go:181] (0xc00293b4a0) (3) Data frame sent I1027 11:13:32.908505 7 log.go:181] (0xc000028420) Data frame received for 3 I1027 11:13:32.908518 7 log.go:181] (0xc00293b4a0) (3) Data frame handling I1027 11:13:32.910345 7 log.go:181] (0xc000028420) Data frame received for 1 I1027 11:13:32.910383 7 log.go:181] (0xc000695e00) (1) Data frame handling I1027 11:13:32.910423 7 log.go:181] (0xc000695e00) (1) Data frame sent I1027 11:13:32.910513 7 log.go:181] (0xc000028420) (0xc000695e00) Stream removed, broadcasting: 1 I1027 11:13:32.910572 7 log.go:181] (0xc000028420) Go away received I1027 11:13:32.910698 7 log.go:181] (0xc000028420) (0xc000695e00) Stream removed, broadcasting: 1 I1027 11:13:32.910736 7 log.go:181] (0xc000028420) (0xc00293b4a0) Stream removed, broadcasting: 3 I1027 11:13:32.910763 7 log.go:181] (0xc000028420) (0xc003e23540) Stream removed, broadcasting: 5 Oct 27 11:13:32.910: INFO: Exec stderr: "" Oct 27 11:13:32.910: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:32.910: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:32.942534 7 log.go:181] (0xc000e1d6b0) (0xc00424d860) Create stream I1027 11:13:32.942568 7 log.go:181] (0xc000e1d6b0) (0xc00424d860) Stream added, broadcasting: 1 I1027 11:13:32.945077 7 log.go:181] (0xc000e1d6b0) Reply frame received for 1 I1027 11:13:32.945176 7 log.go:181] (0xc000e1d6b0) (0xc00048d180) Create stream I1027 11:13:32.945194 7 log.go:181] (0xc000e1d6b0) (0xc00048d180) Stream added, broadcasting: 3 I1027 11:13:32.946293 7 log.go:181] (0xc000e1d6b0) Reply frame received for 3 I1027 11:13:32.946334 7 log.go:181] (0xc000e1d6b0) (0xc00293b540) Create stream I1027 11:13:32.946351 7 log.go:181] (0xc000e1d6b0) (0xc00293b540) Stream added, broadcasting: 5 I1027 11:13:32.947545 7 log.go:181] (0xc000e1d6b0) Reply frame received for 5 I1027 11:13:33.012273 7 log.go:181] (0xc000e1d6b0) Data frame received for 3 I1027 11:13:33.012332 7 log.go:181] (0xc00048d180) (3) Data frame handling I1027 11:13:33.012373 7 log.go:181] (0xc00048d180) (3) Data frame sent I1027 11:13:33.012389 7 log.go:181] (0xc000e1d6b0) Data frame received for 3 I1027 11:13:33.012408 7 log.go:181] (0xc00048d180) (3) Data frame handling I1027 11:13:33.012434 7 log.go:181] (0xc000e1d6b0) Data frame received for 5 I1027 11:13:33.012448 7 log.go:181] (0xc00293b540) (5) Data frame handling I1027 11:13:33.014469 7 log.go:181] (0xc000e1d6b0) Data frame received for 1 I1027 11:13:33.014513 7 log.go:181] (0xc00424d860) (1) Data frame handling I1027 11:13:33.014539 7 log.go:181] (0xc00424d860) (1) Data frame sent I1027 11:13:33.014556 7 log.go:181] (0xc000e1d6b0) (0xc00424d860) Stream removed, broadcasting: 1 I1027 11:13:33.014571 7 log.go:181] (0xc000e1d6b0) Go away received I1027 11:13:33.014731 7 log.go:181] (0xc000e1d6b0) (0xc00424d860) Stream removed, broadcasting: 1 I1027 11:13:33.014772 7 log.go:181] (0xc000e1d6b0) (0xc00048d180) Stream removed, broadcasting: 3 I1027 11:13:33.014795 7 log.go:181] (0xc000e1d6b0) (0xc00293b540) Stream removed, broadcasting: 5 Oct 27 11:13:33.014: INFO: Exec stderr: "" 
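The ExecWithOptions entries above stream `cat /etc/hosts` and `cat /etc/hosts-original` from each container through the pods/exec subresource; the Create stream / Reply frame / Data frame lines are the SPDY stream plumbing for stdin, stdout, and stderr. Below is a hedged client-go sketch of the same exec call; the pod, container, and namespace names are taken from the log, and the rest is illustrative rather than the framework's own helper.

```go
// Sketch: run `cat /etc/hosts` in one container of the test pod via the pods/exec
// subresource, the mechanism behind the ExecWithOptions log entries above.
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-2175").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
```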
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 27 11:13:33.014: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:33.014: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:33.044387 7 log.go:181] (0xc003720790) (0xc00293b860) Create stream I1027 11:13:33.044420 7 log.go:181] (0xc003720790) (0xc00293b860) Stream added, broadcasting: 1 I1027 11:13:33.046545 7 log.go:181] (0xc003720790) Reply frame received for 1 I1027 11:13:33.046591 7 log.go:181] (0xc003720790) (0xc00048d220) Create stream I1027 11:13:33.046611 7 log.go:181] (0xc003720790) (0xc00048d220) Stream added, broadcasting: 3 I1027 11:13:33.047539 7 log.go:181] (0xc003720790) Reply frame received for 3 I1027 11:13:33.047582 7 log.go:181] (0xc003720790) (0xc00048d2c0) Create stream I1027 11:13:33.047598 7 log.go:181] (0xc003720790) (0xc00048d2c0) Stream added, broadcasting: 5 I1027 11:13:33.048584 7 log.go:181] (0xc003720790) Reply frame received for 5 I1027 11:13:33.129030 7 log.go:181] (0xc003720790) Data frame received for 5 I1027 11:13:33.129065 7 log.go:181] (0xc00048d2c0) (5) Data frame handling I1027 11:13:33.129105 7 log.go:181] (0xc003720790) Data frame received for 3 I1027 11:13:33.129156 7 log.go:181] (0xc00048d220) (3) Data frame handling I1027 11:13:33.129179 7 log.go:181] (0xc00048d220) (3) Data frame sent I1027 11:13:33.129194 7 log.go:181] (0xc003720790) Data frame received for 3 I1027 11:13:33.129208 7 log.go:181] (0xc00048d220) (3) Data frame handling I1027 11:13:33.130505 7 log.go:181] (0xc003720790) Data frame received for 1 I1027 11:13:33.130531 7 log.go:181] (0xc00293b860) (1) Data frame handling I1027 11:13:33.130550 7 log.go:181] (0xc00293b860) (1) Data frame sent I1027 11:13:33.130572 7 log.go:181] (0xc003720790) (0xc00293b860) Stream removed, broadcasting: 1 I1027 11:13:33.130589 7 log.go:181] (0xc003720790) Go away received I1027 11:13:33.130683 7 log.go:181] (0xc003720790) (0xc00293b860) Stream removed, broadcasting: 1 I1027 11:13:33.130707 7 log.go:181] (0xc003720790) (0xc00048d220) Stream removed, broadcasting: 3 I1027 11:13:33.130723 7 log.go:181] (0xc003720790) (0xc00048d2c0) Stream removed, broadcasting: 5 Oct 27 11:13:33.130: INFO: Exec stderr: "" Oct 27 11:13:33.130: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:33.130: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:33.172942 7 log.go:181] (0xc000e1dd90) (0xc00424dae0) Create stream I1027 11:13:33.172977 7 log.go:181] (0xc000e1dd90) (0xc00424dae0) Stream added, broadcasting: 1 I1027 11:13:33.174952 7 log.go:181] (0xc000e1dd90) Reply frame received for 1 I1027 11:13:33.174984 7 log.go:181] (0xc000e1dd90) (0xc003e23680) Create stream I1027 11:13:33.174994 7 log.go:181] (0xc000e1dd90) (0xc003e23680) Stream added, broadcasting: 3 I1027 11:13:33.175787 7 log.go:181] (0xc000e1dd90) Reply frame received for 3 I1027 11:13:33.175826 7 log.go:181] (0xc000e1dd90) (0xc000695f40) Create stream I1027 11:13:33.175839 7 log.go:181] (0xc000e1dd90) (0xc000695f40) Stream added, broadcasting: 5 I1027 11:13:33.176994 7 log.go:181] (0xc000e1dd90) Reply frame received for 5 I1027 11:13:33.235400 7 log.go:181] (0xc000e1dd90) Data frame received for 5 I1027 11:13:33.235444 7 
log.go:181] (0xc000695f40) (5) Data frame handling I1027 11:13:33.235466 7 log.go:181] (0xc000e1dd90) Data frame received for 3 I1027 11:13:33.235487 7 log.go:181] (0xc003e23680) (3) Data frame handling I1027 11:13:33.235511 7 log.go:181] (0xc003e23680) (3) Data frame sent I1027 11:13:33.235530 7 log.go:181] (0xc000e1dd90) Data frame received for 3 I1027 11:13:33.235546 7 log.go:181] (0xc003e23680) (3) Data frame handling I1027 11:13:33.237469 7 log.go:181] (0xc000e1dd90) Data frame received for 1 I1027 11:13:33.237495 7 log.go:181] (0xc00424dae0) (1) Data frame handling I1027 11:13:33.237525 7 log.go:181] (0xc00424dae0) (1) Data frame sent I1027 11:13:33.237534 7 log.go:181] (0xc000e1dd90) (0xc00424dae0) Stream removed, broadcasting: 1 I1027 11:13:33.237545 7 log.go:181] (0xc000e1dd90) Go away received I1027 11:13:33.237619 7 log.go:181] (0xc000e1dd90) (0xc00424dae0) Stream removed, broadcasting: 1 I1027 11:13:33.237639 7 log.go:181] (0xc000e1dd90) (0xc003e23680) Stream removed, broadcasting: 3 I1027 11:13:33.237657 7 log.go:181] (0xc000e1dd90) (0xc000695f40) Stream removed, broadcasting: 5 Oct 27 11:13:33.237: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 27 11:13:33.237: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:33.237: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:33.263864 7 log.go:181] (0xc0006b9ef0) (0xc00424de00) Create stream I1027 11:13:33.263906 7 log.go:181] (0xc0006b9ef0) (0xc00424de00) Stream added, broadcasting: 1 I1027 11:13:33.267174 7 log.go:181] (0xc0006b9ef0) Reply frame received for 1 I1027 11:13:33.267209 7 log.go:181] (0xc0006b9ef0) (0xc00424dea0) Create stream I1027 11:13:33.267227 7 log.go:181] (0xc0006b9ef0) (0xc00424dea0) Stream added, broadcasting: 3 I1027 11:13:33.270853 7 log.go:181] (0xc0006b9ef0) Reply frame received for 3 I1027 11:13:33.270882 7 log.go:181] (0xc0006b9ef0) (0xc00048d400) Create stream I1027 11:13:33.270890 7 log.go:181] (0xc0006b9ef0) (0xc00048d400) Stream added, broadcasting: 5 I1027 11:13:33.272639 7 log.go:181] (0xc0006b9ef0) Reply frame received for 5 I1027 11:13:33.333658 7 log.go:181] (0xc0006b9ef0) Data frame received for 5 I1027 11:13:33.333685 7 log.go:181] (0xc00048d400) (5) Data frame handling I1027 11:13:33.333701 7 log.go:181] (0xc0006b9ef0) Data frame received for 3 I1027 11:13:33.333706 7 log.go:181] (0xc00424dea0) (3) Data frame handling I1027 11:13:33.333721 7 log.go:181] (0xc00424dea0) (3) Data frame sent I1027 11:13:33.333874 7 log.go:181] (0xc0006b9ef0) Data frame received for 3 I1027 11:13:33.333897 7 log.go:181] (0xc00424dea0) (3) Data frame handling I1027 11:13:33.338463 7 log.go:181] (0xc0006b9ef0) Data frame received for 1 I1027 11:13:33.338524 7 log.go:181] (0xc00424de00) (1) Data frame handling I1027 11:13:33.338550 7 log.go:181] (0xc00424de00) (1) Data frame sent I1027 11:13:33.338569 7 log.go:181] (0xc0006b9ef0) (0xc00424de00) Stream removed, broadcasting: 1 I1027 11:13:33.338584 7 log.go:181] (0xc0006b9ef0) Go away received I1027 11:13:33.338709 7 log.go:181] (0xc0006b9ef0) (0xc00424de00) Stream removed, broadcasting: 1 I1027 11:13:33.338738 7 log.go:181] (0xc0006b9ef0) (0xc00424dea0) Stream removed, broadcasting: 3 I1027 11:13:33.338762 7 log.go:181] (0xc0006b9ef0) (0xc00048d400) Stream removed, broadcasting: 5 Oct 27 11:13:33.338: INFO: Exec 
stderr: "" Oct 27 11:13:33.338: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:33.338: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:33.367562 7 log.go:181] (0xc0005951e0) (0xc003e23a40) Create stream I1027 11:13:33.367586 7 log.go:181] (0xc0005951e0) (0xc003e23a40) Stream added, broadcasting: 1 I1027 11:13:33.369517 7 log.go:181] (0xc0005951e0) Reply frame received for 1 I1027 11:13:33.369555 7 log.go:181] (0xc0005951e0) (0xc00081a000) Create stream I1027 11:13:33.369568 7 log.go:181] (0xc0005951e0) (0xc00081a000) Stream added, broadcasting: 3 I1027 11:13:33.370578 7 log.go:181] (0xc0005951e0) Reply frame received for 3 I1027 11:13:33.370637 7 log.go:181] (0xc0005951e0) (0xc0028e80a0) Create stream I1027 11:13:33.370664 7 log.go:181] (0xc0005951e0) (0xc0028e80a0) Stream added, broadcasting: 5 I1027 11:13:33.371867 7 log.go:181] (0xc0005951e0) Reply frame received for 5 I1027 11:13:33.431603 7 log.go:181] (0xc0005951e0) Data frame received for 5 I1027 11:13:33.431650 7 log.go:181] (0xc0028e80a0) (5) Data frame handling I1027 11:13:33.431675 7 log.go:181] (0xc0005951e0) Data frame received for 3 I1027 11:13:33.431690 7 log.go:181] (0xc00081a000) (3) Data frame handling I1027 11:13:33.431705 7 log.go:181] (0xc00081a000) (3) Data frame sent I1027 11:13:33.431724 7 log.go:181] (0xc0005951e0) Data frame received for 3 I1027 11:13:33.431737 7 log.go:181] (0xc00081a000) (3) Data frame handling I1027 11:13:33.433042 7 log.go:181] (0xc0005951e0) Data frame received for 1 I1027 11:13:33.433096 7 log.go:181] (0xc003e23a40) (1) Data frame handling I1027 11:13:33.433130 7 log.go:181] (0xc003e23a40) (1) Data frame sent I1027 11:13:33.433156 7 log.go:181] (0xc0005951e0) (0xc003e23a40) Stream removed, broadcasting: 1 I1027 11:13:33.433183 7 log.go:181] (0xc0005951e0) Go away received I1027 11:13:33.433333 7 log.go:181] (0xc0005951e0) (0xc003e23a40) Stream removed, broadcasting: 1 I1027 11:13:33.433365 7 log.go:181] (0xc0005951e0) (0xc00081a000) Stream removed, broadcasting: 3 I1027 11:13:33.433382 7 log.go:181] (0xc0005951e0) (0xc0028e80a0) Stream removed, broadcasting: 5 Oct 27 11:13:33.433: INFO: Exec stderr: "" Oct 27 11:13:33.433: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:33.433: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:33.468199 7 log.go:181] (0xc000029340) (0xc00081a500) Create stream I1027 11:13:33.468252 7 log.go:181] (0xc000029340) (0xc00081a500) Stream added, broadcasting: 1 I1027 11:13:33.470969 7 log.go:181] (0xc000029340) Reply frame received for 1 I1027 11:13:33.471043 7 log.go:181] (0xc000029340) (0xc00081a640) Create stream I1027 11:13:33.471096 7 log.go:181] (0xc000029340) (0xc00081a640) Stream added, broadcasting: 3 I1027 11:13:33.472524 7 log.go:181] (0xc000029340) Reply frame received for 3 I1027 11:13:33.472546 7 log.go:181] (0xc000029340) (0xc00048d4a0) Create stream I1027 11:13:33.472554 7 log.go:181] (0xc000029340) (0xc00048d4a0) Stream added, broadcasting: 5 I1027 11:13:33.473701 7 log.go:181] (0xc000029340) Reply frame received for 5 I1027 11:13:33.546494 7 log.go:181] (0xc000029340) Data frame received for 3 I1027 11:13:33.546542 7 log.go:181] (0xc00081a640) (3) Data frame handling I1027 
11:13:33.546567 7 log.go:181] (0xc00081a640) (3) Data frame sent I1027 11:13:33.546583 7 log.go:181] (0xc000029340) Data frame received for 3 I1027 11:13:33.546597 7 log.go:181] (0xc00081a640) (3) Data frame handling I1027 11:13:33.546642 7 log.go:181] (0xc000029340) Data frame received for 5 I1027 11:13:33.546669 7 log.go:181] (0xc00048d4a0) (5) Data frame handling I1027 11:13:33.548152 7 log.go:181] (0xc000029340) Data frame received for 1 I1027 11:13:33.548177 7 log.go:181] (0xc00081a500) (1) Data frame handling I1027 11:13:33.548199 7 log.go:181] (0xc00081a500) (1) Data frame sent I1027 11:13:33.548213 7 log.go:181] (0xc000029340) (0xc00081a500) Stream removed, broadcasting: 1 I1027 11:13:33.548309 7 log.go:181] (0xc000029340) Go away received I1027 11:13:33.548352 7 log.go:181] (0xc000029340) (0xc00081a500) Stream removed, broadcasting: 1 I1027 11:13:33.548385 7 log.go:181] (0xc000029340) (0xc00081a640) Stream removed, broadcasting: 3 I1027 11:13:33.548404 7 log.go:181] (0xc000029340) (0xc00048d4a0) Stream removed, broadcasting: 5 Oct 27 11:13:33.548: INFO: Exec stderr: "" Oct 27 11:13:33.548: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2175 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:13:33.548: INFO: >>> kubeConfig: /root/.kube/config I1027 11:13:33.579938 7 log.go:181] (0xc0005958c0) (0xc003e23f40) Create stream I1027 11:13:33.579969 7 log.go:181] (0xc0005958c0) (0xc003e23f40) Stream added, broadcasting: 1 I1027 11:13:33.582470 7 log.go:181] (0xc0005958c0) Reply frame received for 1 I1027 11:13:33.582530 7 log.go:181] (0xc0005958c0) (0xc00081a780) Create stream I1027 11:13:33.582541 7 log.go:181] (0xc0005958c0) (0xc00081a780) Stream added, broadcasting: 3 I1027 11:13:33.583369 7 log.go:181] (0xc0005958c0) Reply frame received for 3 I1027 11:13:33.583400 7 log.go:181] (0xc0005958c0) (0xc0028e8140) Create stream I1027 11:13:33.583411 7 log.go:181] (0xc0005958c0) (0xc0028e8140) Stream added, broadcasting: 5 I1027 11:13:33.584227 7 log.go:181] (0xc0005958c0) Reply frame received for 5 I1027 11:13:33.645864 7 log.go:181] (0xc0005958c0) Data frame received for 5 I1027 11:13:33.645930 7 log.go:181] (0xc0028e8140) (5) Data frame handling I1027 11:13:33.645973 7 log.go:181] (0xc0005958c0) Data frame received for 3 I1027 11:13:33.645986 7 log.go:181] (0xc00081a780) (3) Data frame handling I1027 11:13:33.646004 7 log.go:181] (0xc00081a780) (3) Data frame sent I1027 11:13:33.646015 7 log.go:181] (0xc0005958c0) Data frame received for 3 I1027 11:13:33.646025 7 log.go:181] (0xc00081a780) (3) Data frame handling I1027 11:13:33.647495 7 log.go:181] (0xc0005958c0) Data frame received for 1 I1027 11:13:33.647545 7 log.go:181] (0xc003e23f40) (1) Data frame handling I1027 11:13:33.647584 7 log.go:181] (0xc003e23f40) (1) Data frame sent I1027 11:13:33.647607 7 log.go:181] (0xc0005958c0) (0xc003e23f40) Stream removed, broadcasting: 1 I1027 11:13:33.647631 7 log.go:181] (0xc0005958c0) Go away received I1027 11:13:33.647745 7 log.go:181] (0xc0005958c0) (0xc003e23f40) Stream removed, broadcasting: 1 I1027 11:13:33.647771 7 log.go:181] (0xc0005958c0) (0xc00081a780) Stream removed, broadcasting: 3 I1027 11:13:33.647790 7 log.go:181] (0xc0005958c0) (0xc0028e8140) Stream removed, broadcasting: 5 Oct 27 11:13:33.647: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:33.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2175" for this suite. • [SLOW TEST:11.314 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2392,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:33.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 27 11:13:33.853: INFO: Waiting up to 5m0s for pod "pod-1c916b01-bc20-48c9-a786-d9c3751c5427" in namespace "emptydir-6499" to be "Succeeded or Failed" Oct 27 11:13:33.875: INFO: Pod "pod-1c916b01-bc20-48c9-a786-d9c3751c5427": Phase="Pending", Reason="", readiness=false. Elapsed: 21.958448ms Oct 27 11:13:35.919: INFO: Pod "pod-1c916b01-bc20-48c9-a786-d9c3751c5427": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06622858s Oct 27 11:13:37.924: INFO: Pod "pod-1c916b01-bc20-48c9-a786-d9c3751c5427": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070727734s STEP: Saw pod success Oct 27 11:13:37.924: INFO: Pod "pod-1c916b01-bc20-48c9-a786-d9c3751c5427" satisfied condition "Succeeded or Failed" Oct 27 11:13:37.927: INFO: Trying to get logs from node kali-worker2 pod pod-1c916b01-bc20-48c9-a786-d9c3751c5427 container test-container: STEP: delete the pod Oct 27 11:13:37.982: INFO: Waiting for pod pod-1c916b01-bc20-48c9-a786-d9c3751c5427 to disappear Oct 27 11:13:37.995: INFO: Pod pod-1c916b01-bc20-48c9-a786-d9c3751c5427 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:37.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6499" for this suite. 
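The EmptyDir test above mounts a memory-backed emptyDir, has a non-root container create a 0644 file on it, and expects the pod to reach Succeeded. Below is a sketch of the relevant volume and securityContext wiring; the image, command, mount path, and user ID are assumptions for illustration, not the values the e2e test image uses.

```go
// Sketch: a pod with a tmpfs-backed emptyDir mounted by a non-root container,
// the shape of pod the (non-root,0644,tmpfs) test creates. Image, command,
// mount path, and UID are illustrative placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // assumed non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox", // placeholder image
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}
```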
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2403,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:38.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 27 11:13:38.075: INFO: Waiting up to 5m0s for pod "pod-35a9786e-9be4-4b37-bdff-f943fa67813c" in namespace "emptydir-1480" to be "Succeeded or Failed" Oct 27 11:13:38.079: INFO: Pod "pod-35a9786e-9be4-4b37-bdff-f943fa67813c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501708ms Oct 27 11:13:40.190: INFO: Pod "pod-35a9786e-9be4-4b37-bdff-f943fa67813c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114910875s Oct 27 11:13:42.194: INFO: Pod "pod-35a9786e-9be4-4b37-bdff-f943fa67813c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119397825s STEP: Saw pod success Oct 27 11:13:42.194: INFO: Pod "pod-35a9786e-9be4-4b37-bdff-f943fa67813c" satisfied condition "Succeeded or Failed" Oct 27 11:13:42.203: INFO: Trying to get logs from node kali-worker2 pod pod-35a9786e-9be4-4b37-bdff-f943fa67813c container test-container: STEP: delete the pod Oct 27 11:13:42.277: INFO: Waiting for pod pod-35a9786e-9be4-4b37-bdff-f943fa67813c to disappear Oct 27 11:13:42.289: INFO: Pod pod-35a9786e-9be4-4b37-bdff-f943fa67813c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:42.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1480" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":139,"skipped":2405,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:42.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Oct 27 11:13:42.399: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Oct 27 11:13:42.924: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 27 11:13:45.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394022, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394022, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394023, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394022, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:13:47.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394022, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394022, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394023, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394022, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:13:49.916: 
INFO: Waited 723.216213ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:50.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4649" for this suite. • [SLOW TEST:8.701 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":140,"skipped":2410,"failed":0} [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:50.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 27 11:13:51.379: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:13:51.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6920" for this suite. 
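The Events API test above creates a labelled set of events and removes them all with a single DeleteCollection call, then checks the remaining count. A minimal client-go sketch of that call is below; the label selector value is an assumption for illustration.

```go
// Sketch: delete all events in a namespace that carry a given label, the
// operation behind "requesting DeleteCollection of events" above. The label
// selector is a placeholder.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// events.k8s.io/v1 client, matching the "[sig-instrumentation] Events API" group.
	err = client.EventsV1().Events("events-6920").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"}, // placeholder selector
	)
	if err != nil {
		panic(err)
	}
}
```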
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":141,"skipped":2410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:13:51.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:13:52.074: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:13:54.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394032, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394032, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394032, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394032, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:13:57.141: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 27 11:14:01.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config attach --namespace=webhook-5914 to-be-attached-pod -i -c=container1' Oct 27 11:14:01.319: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:14:01.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-5914" for this suite. STEP: Destroying namespace "webhook-5914-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.964 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":142,"skipped":2492,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:14:01.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-db188f8d-7661-4ad3-a8b7-a6a25ee5a2ef in namespace container-probe-1727 Oct 27 11:14:05.589: INFO: Started pod busybox-db188f8d-7661-4ad3-a8b7-a6a25ee5a2ef in namespace container-probe-1727 STEP: checking the pod's current state and verifying that restartCount is present Oct 27 11:14:05.592: INFO: Initial restart count of pod busybox-db188f8d-7661-4ad3-a8b7-a6a25ee5a2ef is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:06.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1727" for this suite. 
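The probe test above runs a busybox pod whose exec liveness probe is `cat /tmp/health`, then verifies over roughly four minutes that restartCount stays at 0. Below is a sketch of how such a probe is declared on a container; the command keeping the file alive and the probe timings are assumptions, not the values the test sets.

```go
// Sketch: a container with an exec liveness probe that runs `cat /tmp/health`,
// as in the pod above. Image, command, and probe timings are illustrative.
// Field names match k8s.io/api v0.19 (Handler was later renamed ProbeHandler).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "busybox",
		Image:   "busybox", // placeholder image
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 5,  // assumed
			PeriodSeconds:       10, // assumed
			FailureThreshold:    3,  // assumed
		},
	}
	// As long as /tmp/health exists the probe succeeds, so the kubelet never
	// restarts the container; the test asserts exactly that via restartCount.
	fmt.Printf("%+v\n", container.LivenessProbe)
}
```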
• [SLOW TEST:244.927 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2495,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:06.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:18:06.470: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b" in namespace "downward-api-9284" to be "Succeeded or Failed" Oct 27 11:18:06.474: INFO: Pod "downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022187ms Oct 27 11:18:08.479: INFO: Pod "downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008334722s Oct 27 11:18:10.483: INFO: Pod "downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012803906s Oct 27 11:18:12.489: INFO: Pod "downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018498811s STEP: Saw pod success Oct 27 11:18:12.489: INFO: Pod "downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b" satisfied condition "Succeeded or Failed" Oct 27 11:18:12.492: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b container client-container: STEP: delete the pod Oct 27 11:18:12.537: INFO: Waiting for pod downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b to disappear Oct 27 11:18:12.547: INFO: Pod downwardapi-volume-23f5536b-3fdd-4d0c-81d7-7e93ea09362b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:12.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9284" for this suite. • [SLOW TEST:6.193 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2496,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:12.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:18:12.622: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 27 11:18:14.669: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:14.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "replication-controller-1860" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":145,"skipped":2501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:14.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 27 11:18:15.022: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 27 11:18:15.025: INFO: starting watch STEP: patching STEP: updating Oct 27 11:18:15.036: INFO: waiting for watch events with expected annotations Oct 27 11:18:15.037: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:15.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-5347" for this suite. 
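Note: the Ingress API steps above (create, get/list/watch, patch, update, /status, delete) all go through the networking.k8s.io/v1 group. Below is a rough client-go sketch of just the create-and-patch legs; the names (example-ingress, example-service, namespace default) are illustrative placeholders, not the generated fixtures the suite uses.

package main

import (
	"context"
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pathType := netv1.PathTypePrefix
	ing := &netv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "example-ingress"},
		Spec: netv1.IngressSpec{
			Rules: []netv1.IngressRule{{
				Host: "example.com",
				IngressRuleValue: netv1.IngressRuleValue{
					HTTP: &netv1.HTTPIngressRuleValue{
						Paths: []netv1.HTTPIngressPath{{
							Path:     "/",
							PathType: &pathType,
							Backend: netv1.IngressBackend{
								Service: &netv1.IngressServiceBackend{
									Name: "example-service",
									Port: netv1.ServiceBackendPort{Number: 80},
								},
							},
						}},
					},
				},
			}},
		},
	}

	// Create the Ingress, then patch an annotation onto it, mirroring the
	// "creating" and "patching" steps in the spec above.
	created, err := cs.NetworkingV1().Ingresses("default").Create(context.TODO(), ing, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	if _, err := cs.NetworkingV1().Ingresses("default").Patch(context.TODO(), created.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created and patched", created.Name)
}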
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":146,"skipped":2539,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:16.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-7056 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7056 to expose endpoints map[] Oct 27 11:18:16.851: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Oct 27 11:18:17.886: INFO: successfully validated that service multi-endpoint-test in namespace services-7056 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7056 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7056 to expose endpoints map[pod1:[100]] Oct 27 11:18:22.014: INFO: successfully validated that service multi-endpoint-test in namespace services-7056 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-7056 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7056 to expose endpoints map[pod1:[100] pod2:[101]] Oct 27 11:18:26.120: INFO: successfully validated that service multi-endpoint-test in namespace services-7056 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-7056 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7056 to expose endpoints map[pod2:[101]] Oct 27 11:18:26.229: INFO: successfully validated that service multi-endpoint-test in namespace services-7056 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-7056 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7056 to expose endpoints map[] Oct 27 11:18:27.256: INFO: successfully validated that service multi-endpoint-test in namespace services-7056 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:27.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7056" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.304 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":147,"skipped":2548,"failed":0} SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:27.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7a2f04b6-7041-4ebf-974d-b57c06607ced STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7a2f04b6-7041-4ebf-974d-b57c06607ced STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:33.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-11" for this suite. 
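Note: the "Updating configmap ... waiting to observe update in volume" steps amount to mutating the ConfigMap behind a projected volume and letting the kubelet re-sync it into the running pod. A minimal sketch, assuming illustrative object names (the real fixtures are generated per run and no longer exist):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the ConfigMap backing the projected volume and change one key.
	// The kubelet re-syncs projected/ConfigMap volumes periodically, so a
	// pod mounting it eventually sees the new file content without being
	// restarted -- the condition the test polls for.
	ns, name := "projected-11", "projected-configmap-test-upd" // illustrative names
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("updated", name)
}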
• [SLOW TEST:6.515 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2550,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:33.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2a3bc196-8947-4951-ab5e-073323b01d66 STEP: Creating a pod to test consume configMaps Oct 27 11:18:33.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4" in namespace "configmap-7587" to be "Succeeded or Failed" Oct 27 11:18:34.011: INFO: Pod "pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.035976ms Oct 27 11:18:36.016: INFO: Pod "pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024890689s Oct 27 11:18:38.020: INFO: Pod "pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028814578s STEP: Saw pod success Oct 27 11:18:38.020: INFO: Pod "pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4" satisfied condition "Succeeded or Failed" Oct 27 11:18:38.022: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4 container configmap-volume-test: STEP: delete the pod Oct 27 11:18:38.080: INFO: Waiting for pod pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4 to disappear Oct 27 11:18:38.187: INFO: Pod pod-configmaps-f4c08944-e2f0-4702-be6a-28b505da5fc4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7587" for this suite. 
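Note: the consumable-with-mappings-as-non-root case boils down to a pod like the sketch below: a ConfigMap volume whose items remap a key to a nested path, read by a container running under a non-root UID. Names, image, and paths are illustrative stand-ins for the suite's generated fixtures.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid := int64(1000)
	nonRoot := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    &uid,
				RunAsNonRoot: &nonRoot,
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// The mapping: key "data-1" is exposed in the volume as "path/to/data-2".
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created")
}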
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2557,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:38.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 27 11:18:38.352: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 27 11:18:38.360: INFO: Waiting for terminating namespaces to be deleted... Oct 27 11:18:38.363: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 27 11:18:38.374: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 11:18:38.374: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 11:18:38.374: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 11:18:38.374: INFO: Container kube-proxy ready: true, restart count 0 Oct 27 11:18:38.374: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 27 11:18:38.378: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 11:18:38.378: INFO: Container kindnet-cni ready: true, restart count 0 Oct 27 11:18:38.378: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 27 11:18:38.378: INFO: Container kube-proxy ready: true, restart count 0 Oct 27 11:18:38.378: INFO: pod-projected-configmaps-c12ef07e-6e6a-4d76-ac76-485ff955bbfe from projected-11 started at 2020-10-27 11:18:27 +0000 UTC (1 container statuses recorded) Oct 27 11:18:38.378: INFO: Container projected-configmap-volume-test ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 Oct 27 11:18:38.488: INFO: Pod kindnet-pdv4j requesting resource cpu=100m on Node kali-worker Oct 27 11:18:38.488: INFO: Pod kindnet-pgjc7 requesting resource cpu=100m on Node kali-worker2 Oct 27 11:18:38.488: INFO: Pod kube-proxy-qhsmg requesting resource cpu=0m on Node kali-worker2 Oct 27 11:18:38.488: INFO: Pod kube-proxy-qsqz8 requesting resource cpu=0m on Node kali-worker Oct 27 11:18:38.488: INFO: Pod pod-projected-configmaps-c12ef07e-6e6a-4d76-ac76-485ff955bbfe requesting resource cpu=0m on Node kali-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
Oct 27 11:18:38.488: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker Oct 27 11:18:38.495: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd.1641d558e3b0cb9f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623.1641d558f25b7b1b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623.1641d55944213ea4], Reason = [Created], Message = [Created container filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623] STEP: Considering event: Type = [Normal], Name = [filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd.1641d55936dec57c], Reason = [Created], Message = [Created container filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd] STEP: Considering event: Type = [Normal], Name = [filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd.1641d558867cf2d3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6416/filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623.1641d558884cec5e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6416/filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623.1641d559526f7fba], Reason = [Started], Message = [Started container filler-pod-5936e5bc-c4cf-42c3-ac07-38872ab62623] STEP: Considering event: Type = [Normal], Name = [filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd.1641d5594d0f288c], Reason = [Started], Message = [Started container filler-pod-ab528b2b-7835-4b6a-a522-bd066a2283fd] STEP: Considering event: Type = [Warning], Name = [additional-pod.1641d559efa63c6b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1641d559f2252a9d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:45.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6416" for this suite. 
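Note: the "filler" pod sizes logged above (cpu=11130m) come from subtracting the CPU already requested on each node from that node's allocatable CPU, so that one more pod with a non-zero CPU request cannot fit and fails with "Insufficient cpu". A rough sketch of that bookkeeping with client-go, with the node name hard-coded for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "kali-worker"
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	allocatable := node.Status.Allocatable.Cpu().MilliValue()

	// Sum the CPU requests of every pod already scheduled to the node.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	var requested int64
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			requested += c.Resources.Requests.Cpu().MilliValue()
		}
	}

	// A filler pod requesting the difference leaves no CPU headroom on the node.
	fmt.Printf("node %s: allocatable %dm, requested %dm, filler %dm\n",
		nodeName, allocatable, requested, allocatable-requested)
}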
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.495 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":150,"skipped":2566,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:45.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:45.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-145" for this suite. 
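Note: the Endpoint lifecycle steps that follow (create, update, patch, delete by collection) operate on a hand-built core/v1 Endpoints object rather than one populated by a Service. A sketch of the create/patch/delete legs, with an illustrative name and address:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default"
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "testservice"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.24"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	created, err := cs.CoreV1().Endpoints(ns).Create(context.TODO(), ep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Patch a label onto the object, then delete it -- the patch and
	// deletion legs of the lifecycle the test walks through.
	patch := []byte(`{"metadata":{"labels":{"test-endpoint":"patched"}}}`)
	if _, err := cs.CoreV1().Endpoints(ns).Patch(context.TODO(), created.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	if err := cs.CoreV1().Endpoints(ns).Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("endpoint lifecycle complete")
}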
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":151,"skipped":2575,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:45.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:49.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1588" for this suite. 
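Note: the "busybox command that always fails" case comes down to a never-restarting pod whose container exits non-zero; its container status then carries a terminated state with a reason (typically "Error"), which is what the assertion reads. Roughly, with illustrative names, and keeping in mind that in practice you poll until the terminated state appears:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A container whose command always fails; with RestartPolicy Never it
	// ends up terminated rather than in a crash loop.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Later (after the container has run and exited), inspect the status
	// for the termination reason.
	got, err := cs.CoreV1().Pods("default").Get(context.TODO(), "bin-false", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range got.Status.ContainerStatuses {
		if st.State.Terminated != nil {
			fmt.Println("terminated reason:", st.State.Terminated.Reason)
		}
	}
}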
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2593,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:49.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Oct 27 11:18:50.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-459' Oct 27 11:18:50.408: INFO: stderr: "" Oct 27 11:18:50.408: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 27 11:18:51.589: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 11:18:51.589: INFO: Found 0 / 1 Oct 27 11:18:52.463: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 11:18:52.463: INFO: Found 0 / 1 Oct 27 11:18:53.413: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 11:18:53.413: INFO: Found 0 / 1 Oct 27 11:18:54.413: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 11:18:54.413: INFO: Found 1 / 1 Oct 27 11:18:54.413: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Oct 27 11:18:54.416: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 11:18:54.416: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 27 11:18:54.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config patch pod agnhost-primary-fjn85 --namespace=kubectl-459 -p {"metadata":{"annotations":{"x":"y"}}}' Oct 27 11:18:54.527: INFO: stderr: "" Oct 27 11:18:54.527: INFO: stdout: "pod/agnhost-primary-fjn85 patched\n" STEP: checking annotations Oct 27 11:18:54.546: INFO: Selector matched 1 pods for map[app:agnhost] Oct 27 11:18:54.546: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:18:54.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-459" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":153,"skipped":2603,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:18:54.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:18:54.660: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Oct 27 11:18:54.667: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:18:54.683: INFO: Number of nodes with available pods: 0 Oct 27 11:18:54.683: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:18:55.689: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:18:55.692: INFO: Number of nodes with available pods: 0 Oct 27 11:18:55.692: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:18:56.688: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:18:56.692: INFO: Number of nodes with available pods: 0 Oct 27 11:18:56.692: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:18:57.775: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:18:57.778: INFO: Number of nodes with available pods: 0 Oct 27 11:18:57.778: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:18:58.689: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:18:58.693: INFO: Number of nodes with available pods: 0 Oct 27 11:18:58.693: INFO: Node kali-worker is running more than one daemon pod Oct 27 11:18:59.715: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:18:59.719: INFO: Number of nodes with available pods: 1 Oct 27 11:18:59.719: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:19:00.697: INFO: DaemonSet pods can't 
tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:00.701: INFO: Number of nodes with available pods: 2 Oct 27 11:19:00.701: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Oct 27 11:19:00.870: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:00.870: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:00.874: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:01.918: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:01.918: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:01.923: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:02.879: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:02.879: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:02.883: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:03.880: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:03.880: INFO: Pod daemon-set-k7q77 is not available Oct 27 11:19:03.880: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:03.885: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:04.880: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:04.880: INFO: Pod daemon-set-k7q77 is not available Oct 27 11:19:04.880: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:04.885: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:05.880: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:05.880: INFO: Pod daemon-set-k7q77 is not available Oct 27 11:19:05.880: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 27 11:19:05.884: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:06.879: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:06.880: INFO: Pod daemon-set-k7q77 is not available Oct 27 11:19:06.880: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:06.884: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:07.879: INFO: Wrong image for pod: daemon-set-k7q77. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:07.879: INFO: Pod daemon-set-k7q77 is not available Oct 27 11:19:07.879: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:07.882: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:08.879: INFO: Pod daemon-set-8l2ks is not available Oct 27 11:19:08.879: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:08.884: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:09.878: INFO: Pod daemon-set-8l2ks is not available Oct 27 11:19:09.878: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:09.881: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:11.068: INFO: Pod daemon-set-8l2ks is not available Oct 27 11:19:11.068: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:11.072: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:11.879: INFO: Pod daemon-set-8l2ks is not available Oct 27 11:19:11.880: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:11.885: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:12.909: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 27 11:19:12.913: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:13.880: INFO: Wrong image for pod: daemon-set-s4sqb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 27 11:19:13.880: INFO: Pod daemon-set-s4sqb is not available Oct 27 11:19:13.885: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:14.883: INFO: Pod daemon-set-tl4k4 is not available Oct 27 11:19:14.888: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Oct 27 11:19:14.898: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:14.912: INFO: Number of nodes with available pods: 1 Oct 27 11:19:14.912: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:19:15.918: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:15.925: INFO: Number of nodes with available pods: 1 Oct 27 11:19:15.925: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:19:16.918: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:16.921: INFO: Number of nodes with available pods: 1 Oct 27 11:19:16.921: INFO: Node kali-worker2 is running more than one daemon pod Oct 27 11:19:17.918: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 27 11:19:17.923: INFO: Number of nodes with available pods: 2 Oct 27 11:19:17.923: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9496, will wait for the garbage collector to delete the pods Oct 27 11:19:17.996: INFO: Deleting DaemonSet.extensions daemon-set took: 7.631948ms Oct 27 11:19:18.396: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.258443ms Oct 27 11:19:28.700: INFO: Number of nodes with available pods: 0 Oct 27 11:19:28.700: INFO: Number of running nodes: 0, number of available pods: 0 Oct 27 11:19:28.703: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9496/daemonsets","resourceVersion":"8971776"},"items":null} Oct 27 11:19:28.706: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9496/pods","resourceVersion":"8971776"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:19:28.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9496" for this suite. 
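Note: the image rollout checked above is driven by editing the DaemonSet's pod template while its update strategy is RollingUpdate (the default), after which the controller replaces daemon pods node by node. A sketch of that update via client-go; the namespace and name echo this run for readability, but any existing DaemonSet behaves the same way:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "daemonsets-9496", "daemon-set" // illustrative; this run's namespace was deleted afterwards
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Changing the pod template image triggers the rolling update; the test
	// then waits for every daemon pod to report the new image and for the
	// DaemonSet to be fully available again.
	ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.20"
	if _, err := cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("daemonset image updated")
}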
• [SLOW TEST:34.170 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":154,"skipped":2610,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:19:28.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:19:34.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2601" for this suite. 
• [SLOW TEST:6.107 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2613,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:19:34.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-6814/secret-test-c84b3cf7-bd1d-4bbf-b72e-7c0555e5d069 STEP: Creating a pod to test consume secrets Oct 27 11:19:34.921: INFO: Waiting up to 5m0s for pod "pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4" in namespace "secrets-6814" to be "Succeeded or Failed" Oct 27 11:19:34.944: INFO: Pod "pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.330532ms Oct 27 11:19:36.952: INFO: Pod "pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031067003s Oct 27 11:19:38.956: INFO: Pod "pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034633014s STEP: Saw pod success Oct 27 11:19:38.956: INFO: Pod "pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4" satisfied condition "Succeeded or Failed" Oct 27 11:19:38.958: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4 container env-test: STEP: delete the pod Oct 27 11:19:39.110: INFO: Waiting for pod pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4 to disappear Oct 27 11:19:39.118: INFO: Pod pod-configmaps-af62fdcc-a2fe-4c71-8cc3-f876b4bcf3e4 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:19:39.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6814" for this suite. 
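Note: consuming a Secret "via the environment" means wiring an environment variable to a secretKeyRef, roughly as sketched below; the secret name, key, and image are illustrative rather than the suite's generated fixtures.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default"
	// A Secret with a single key...
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// ...exposed to the container as the SECRET_DATA environment variable,
	// which the test then reads back from the pod's log output.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("secret and pod created")
}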
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":156,"skipped":2615,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:19:39.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-cd028e5b-34ce-425b-b23c-4efdcab703eb [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:19:39.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4264" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":157,"skipped":2616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:19:39.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1523/configmap-test-fe916a97-027a-40c2-b218-64f026aba59a STEP: Creating a pod to test consume configMaps Oct 27 11:19:39.528: INFO: Waiting up to 5m0s for pod "pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396" in namespace "configmap-1523" to be "Succeeded or Failed" Oct 27 11:19:39.557: INFO: Pod "pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396": Phase="Pending", Reason="", readiness=false. Elapsed: 28.631067ms Oct 27 11:19:41.673: INFO: Pod "pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145175771s Oct 27 11:19:43.679: INFO: Pod "pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396": Phase="Running", Reason="", readiness=true. Elapsed: 4.15041862s Oct 27 11:19:45.683: INFO: Pod "pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.155044216s STEP: Saw pod success Oct 27 11:19:45.683: INFO: Pod "pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396" satisfied condition "Succeeded or Failed" Oct 27 11:19:45.686: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396 container env-test: STEP: delete the pod Oct 27 11:19:45.720: INFO: Waiting for pod pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396 to disappear Oct 27 11:19:45.754: INFO: Pod pod-configmaps-55f40460-54ac-40b3-b9e1-f3ee5aaec396 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:19:45.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1523" for this suite. • [SLOW TEST:6.357 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2640,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:19:45.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Oct 27 11:19:51.834: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4717 PodName:pod-sharedvolume-e95d108d-3223-4506-ad26-dd90b6936d29 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:19:51.834: INFO: >>> kubeConfig: /root/.kube/config I1027 11:19:51.864104 7 log.go:181] (0xc0005ee160) (0xc00424c6e0) Create stream I1027 11:19:51.864131 7 log.go:181] (0xc0005ee160) (0xc00424c6e0) Stream added, broadcasting: 1 I1027 11:19:51.866641 7 log.go:181] (0xc0005ee160) Reply frame received for 1 I1027 11:19:51.866686 7 log.go:181] (0xc0005ee160) (0xc000694460) Create stream I1027 11:19:51.866707 7 log.go:181] (0xc0005ee160) (0xc000694460) Stream added, broadcasting: 3 I1027 11:19:51.867791 7 log.go:181] (0xc0005ee160) Reply frame received for 3 I1027 11:19:51.867841 7 log.go:181] (0xc0005ee160) (0xc002118500) Create stream I1027 11:19:51.867857 7 log.go:181] (0xc0005ee160) (0xc002118500) Stream added, 
broadcasting: 5 I1027 11:19:51.868795 7 log.go:181] (0xc0005ee160) Reply frame received for 5 I1027 11:19:51.959545 7 log.go:181] (0xc0005ee160) Data frame received for 5 I1027 11:19:51.959622 7 log.go:181] (0xc002118500) (5) Data frame handling I1027 11:19:51.959667 7 log.go:181] (0xc0005ee160) Data frame received for 3 I1027 11:19:51.959693 7 log.go:181] (0xc000694460) (3) Data frame handling I1027 11:19:51.959748 7 log.go:181] (0xc000694460) (3) Data frame sent I1027 11:19:51.959787 7 log.go:181] (0xc0005ee160) Data frame received for 3 I1027 11:19:51.959811 7 log.go:181] (0xc000694460) (3) Data frame handling I1027 11:19:51.961283 7 log.go:181] (0xc0005ee160) Data frame received for 1 I1027 11:19:51.961304 7 log.go:181] (0xc00424c6e0) (1) Data frame handling I1027 11:19:51.961320 7 log.go:181] (0xc00424c6e0) (1) Data frame sent I1027 11:19:51.961333 7 log.go:181] (0xc0005ee160) (0xc00424c6e0) Stream removed, broadcasting: 1 I1027 11:19:51.961353 7 log.go:181] (0xc0005ee160) Go away received I1027 11:19:51.961557 7 log.go:181] (0xc0005ee160) (0xc00424c6e0) Stream removed, broadcasting: 1 I1027 11:19:51.961593 7 log.go:181] (0xc0005ee160) (0xc000694460) Stream removed, broadcasting: 3 I1027 11:19:51.961605 7 log.go:181] (0xc0005ee160) (0xc002118500) Stream removed, broadcasting: 5 Oct 27 11:19:51.961: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:19:51.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4717" for this suite. • [SLOW TEST:6.207 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":159,"skipped":2640,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:19:51.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:20:52.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4676" for this suite. • [SLOW TEST:60.184 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2641,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:20:52.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:20:52.776: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:20:54.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:20:56.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394452, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:20:59.819: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:20:59.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1500" for this suite. STEP: Destroying namespace "webhook-1500-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.950 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":161,"skipped":2650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:00.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:21:00.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a" in namespace "projected-5286" to be "Succeeded or Failed" Oct 27 11:21:00.326: INFO: Pod "downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a": Phase="Pending", Reason="", readiness=false. Elapsed: 110.234442ms Oct 27 11:21:02.427: INFO: Pod "downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211203714s Oct 27 11:21:04.458: INFO: Pod "downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241750464s STEP: Saw pod success Oct 27 11:21:04.458: INFO: Pod "downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a" satisfied condition "Succeeded or Failed" Oct 27 11:21:04.460: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a container client-container: STEP: delete the pod Oct 27 11:21:04.497: INFO: Waiting for pod downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a to disappear Oct 27 11:21:04.507: INFO: Pod downwardapi-volume-73429527-b8fd-4f6b-9119-64f5c2f2962a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:21:04.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5286" for this suite. 
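For reference, the pod exercised by the "node allocatable (cpu) as default cpu limit" test above can be approximated with the client-go sketch below. The generated name, image, file path and divisor are illustrative assumptions, not the test's actual values; the relevant part is the projected downwardAPI volume that exposes limits.cpu, which falls back to the node's allocatable cpu because the container declares no limit.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuLimitPod sketches a "downwardapi-volume-*" style pod: a projected downwardAPI
// volume publishes the container's effective cpu limit into a file. With no cpu
// limit set on the container, the kubelet substitutes the node's allocatable cpu,
// which is the behaviour the conformance test asserts.
func cpuLimitPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "docker.io/library/busybox:1.29", // placeholder image
                Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_limit",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.cpu",
                                        Divisor:       resource.MustParse("1m"), // assumed divisor
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}
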
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2677,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:04.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:21:04.560: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:21:08.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-652" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2683,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:08.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:21:09.260: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:21:11.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394469, 
loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394469, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394469, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394469, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:21:14.450: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:21:14.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3034" for this suite. STEP: Destroying namespace "webhook-3034-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.933 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":164,"skipped":2683,"failed":0} SSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:14.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Oct 27 11:21:14.814: INFO: created test-podtemplate-1 Oct 27 11:21:14.859: INFO: created test-podtemplate-2 Oct 27 11:21:14.863: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Oct 27 11:21:14.868: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 27 11:21:14.889: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:21:14.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-130" for this suite. 
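The "delete a collection of pod templates" steps above map onto a single DeleteCollection request followed by a confirming List; a minimal client-go sketch, where the namespace, label selector and clientset wiring are assumed rather than taken from the test:

package e2esketch

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deletePodTemplateCollection deletes every PodTemplate matching the label selector
// in one request and then lists what is left, mirroring the create / delete-collection /
// confirm-quantity sequence recorded above.
func deletePodTemplateCollection(cs kubernetes.Interface, ns, selector string) error {
    ctx := context.TODO()
    if err := cs.CoreV1().PodTemplates(ns).DeleteCollection(ctx,
        metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector}); err != nil {
        return err
    }
    remaining, err := cs.CoreV1().PodTemplates(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    if err != nil {
        return err
    }
    fmt.Printf("%d pod templates still match %q\n", len(remaining.Items), selector)
    return nil
}
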
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":165,"skipped":2687,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:14.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:21:14.949: INFO: Creating deployment "test-recreate-deployment" Oct 27 11:21:14.997: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 27 11:21:15.005: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 27 11:21:17.153: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 27 11:21:17.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394475, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394475, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394475, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394475, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:21:19.170: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 27 11:21:19.183: INFO: Updating deployment test-recreate-deployment Oct 27 11:21:19.183: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 27 11:21:19.730: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2101 /apis/apps/v1/namespaces/deployment-2101/deployments/test-recreate-deployment 3c467059-856a-420a-af81-fdfb2ebd9c80 8972503 2 2020-10-27 11:21:14 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-27 11:21:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-27 11:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003a967a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-27 11:21:19 +0000 UTC,LastTransitionTime:2020-10-27 11:21:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-10-27 11:21:19 +0000 UTC,LastTransitionTime:2020-10-27 11:21:15 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 27 11:21:19.824: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-2101 /apis/apps/v1/namespaces/deployment-2101/replicasets/test-recreate-deployment-f79dd4667 349aa0c5-801f-4999-880e-1ae6bed80121 8972501 1 2020-10-27 11:21:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 
3c467059-856a-420a-af81-fdfb2ebd9c80 0xc003a96ca0 0xc003a96ca1}] [] [{kube-controller-manager Update apps/v1 2020-10-27 11:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c467059-856a-420a-af81-fdfb2ebd9c80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003a96d18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:21:19.824: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 27 11:21:19.825: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-2101 /apis/apps/v1/namespaces/deployment-2101/replicasets/test-recreate-deployment-c96cf48f 842bc5c0-c947-4673-bcb9-10893f0ee25c 8972493 2 2020-10-27 11:21:14 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3c467059-856a-420a-af81-fdfb2ebd9c80 0xc003a96baf 0xc003a96bc0}] [] [{kube-controller-manager Update apps/v1 2020-10-27 11:21:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3c467059-856a-420a-af81-fdfb2ebd9c80\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003a96c38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:21:19.828: INFO: Pod "test-recreate-deployment-f79dd4667-2dv6q" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-2dv6q test-recreate-deployment-f79dd4667- deployment-2101 /api/v1/namespaces/deployment-2101/pods/test-recreate-deployment-f79dd4667-2dv6q 7b515efc-7e8d-4d5d-82a8-92514df19eec 8972504 0 2020-10-27 11:21:19 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 349aa0c5-801f-4999-880e-1ae6bed80121 0xc003a971e0 0xc003a971e1}] [] [{kube-controller-manager Update v1 2020-10-27 11:21:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"349aa0c5-801f-4999-880e-1ae6bed80121\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 11:21:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dc26l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dc26l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dc26l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:21:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:21:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:21:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:21:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-27 11:21:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:21:19.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2101" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":166,"skipped":2696,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:19.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc Oct 27 11:21:20.218: INFO: Pod name my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc: Found 0 pods out of 1 Oct 27 11:21:25.507: INFO: Pod name my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc: Found 1 pods out of 1 Oct 27 11:21:25.507: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc" are running Oct 27 11:21:25.573: INFO: Pod "my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc-9gpqh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:21:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 
UTC LastTransitionTime:2020-10-27 11:21:24 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:21:24 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-27 11:21:20 +0000 UTC Reason: Message:}]) Oct 27 11:21:25.574: INFO: Trying to dial the pod Oct 27 11:21:30.585: INFO: Controller my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc: Got expected result from replica 1 [my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc-9gpqh]: "my-hostname-basic-a44a02e4-9791-495e-9607-f0b0743e15cc-9gpqh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:21:30.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3879" for this suite. • [SLOW TEST:10.716 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":167,"skipped":2705,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:21:30.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Oct 27 11:21:31.437: INFO: Pod name wrapped-volume-race-879d25b0-9ee8-4a89-be8c-bf8608181f4c: Found 0 pods out of 5 Oct 27 11:21:36.447: INFO: Pod name wrapped-volume-race-879d25b0-9ee8-4a89-be8c-bf8608181f4c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-879d25b0-9ee8-4a89-be8c-bf8608181f4c in namespace emptydir-wrapper-8892, will wait for the garbage collector to delete the pods Oct 27 11:21:52.558: INFO: Deleting ReplicationController wrapped-volume-race-879d25b0-9ee8-4a89-be8c-bf8608181f4c took: 8.410766ms Oct 27 11:21:53.058: INFO: Terminating ReplicationController wrapped-volume-race-879d25b0-9ee8-4a89-be8c-bf8608181f4c pods took: 500.147453ms STEP: Creating RC which spawns configmap-volume pods Oct 27 11:22:08.294: 
INFO: Pod name wrapped-volume-race-a4e8c2a8-7f42-4aba-b6e2-daee62597433: Found 0 pods out of 5 Oct 27 11:22:13.303: INFO: Pod name wrapped-volume-race-a4e8c2a8-7f42-4aba-b6e2-daee62597433: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a4e8c2a8-7f42-4aba-b6e2-daee62597433 in namespace emptydir-wrapper-8892, will wait for the garbage collector to delete the pods Oct 27 11:22:29.426: INFO: Deleting ReplicationController wrapped-volume-race-a4e8c2a8-7f42-4aba-b6e2-daee62597433 took: 42.127637ms Oct 27 11:22:29.927: INFO: Terminating ReplicationController wrapped-volume-race-a4e8c2a8-7f42-4aba-b6e2-daee62597433 pods took: 500.257338ms STEP: Creating RC which spawns configmap-volume pods Oct 27 11:22:38.884: INFO: Pod name wrapped-volume-race-1bb6dbbb-4b5c-4229-afe0-f561bbf0b26c: Found 0 pods out of 5 Oct 27 11:22:43.894: INFO: Pod name wrapped-volume-race-1bb6dbbb-4b5c-4229-afe0-f561bbf0b26c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1bb6dbbb-4b5c-4229-afe0-f561bbf0b26c in namespace emptydir-wrapper-8892, will wait for the garbage collector to delete the pods Oct 27 11:23:00.000: INFO: Deleting ReplicationController wrapped-volume-race-1bb6dbbb-4b5c-4229-afe0-f561bbf0b26c took: 7.192216ms Oct 27 11:23:00.500: INFO: Terminating ReplicationController wrapped-volume-race-1bb6dbbb-4b5c-4229-afe0-f561bbf0b26c pods took: 500.228175ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:09.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8892" for this suite. 
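Each "wrapped-volume-race" pod above mounts a large number of ConfigMap volumes at once, which is what stresses the kubelet's wrapped (emptyDir-backed) volume setup. A rough sketch of such a pod spec; the image, volume names and mount paths are placeholders:

package e2esketch

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrappedVolumeRacePod mounts each given ConfigMap as its own volume, so a single
// pod forces the kubelet to set up many configmap-backed volumes concurrently --
// the situation the race-condition test stresses with 50 ConfigMaps per pod.
func wrappedVolumeRacePod(configMapNames []string) *corev1.Pod {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "wrapped-volume-race-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "docker.io/library/busybox:1.29", // placeholder image
                Command: []string{"sleep", "10000"},
            }},
        },
    }
    for i, name := range configMapNames {
        volName := fmt.Sprintf("racey-configmap-%d", i)
        pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                },
            },
        })
        pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
            corev1.VolumeMount{Name: volName, MountPath: "/etc/" + volName})
    }
    return pod
}
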
• [SLOW TEST:98.774 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":168,"skipped":2706,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:09.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:13.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6028" for this suite. 
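The hostAliases behaviour checked in the Kubelet test above boils down to a pod spec like the following sketch (the IP address, hostnames and image are made-up values): the kubelet merges the declared aliases into the container's /etc/hosts, and the test reads the file back.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasesPod declares static host entries; the kubelet appends them to the
// container's /etc/hosts, which is what the conformance test verifies.
func hostAliasesPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "busybox-host-aliases-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            HostAliases: []corev1.HostAlias{{
                IP:        "123.45.67.89",                     // placeholder address
                Hostnames: []string{"foo.local", "bar.local"}, // placeholder hostnames
            }},
            Containers: []corev1.Container{{
                Name:    "busybox",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/hosts"},
            }},
        },
    }
}
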
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2719,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:13.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 27 11:23:20.194: INFO: Successfully updated pod "annotationupdate499c19a2-8573-4ea7-8ece-f43acd6e33b3" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:22.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9169" for this suite. 
• [SLOW TEST:8.738 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":2727,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:22.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:22.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5115" for this suite. 
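The Events API steps recorded above go through the events.k8s.io/v1 typed client; a minimal sketch of the list, get and delete portion (the patch and update steps are omitted, and the namespace and event name are placeholders):

package e2esketch

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// listGetDeleteEvent lists events in a namespace via events.k8s.io/v1, fetches one
// by name, prints a couple of its fields, then deletes it.
func listGetDeleteEvent(cs kubernetes.Interface, ns, name string) error {
    ctx := context.TODO()
    if _, err := cs.EventsV1().Events(ns).List(ctx, metav1.ListOptions{}); err != nil {
        return err
    }
    ev, err := cs.EventsV1().Events(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    fmt.Printf("event %s: reason=%q note=%q\n", ev.Name, ev.Reason, ev.Note)
    return cs.EventsV1().Events(ns).Delete(ctx, name, metav1.DeleteOptions{})
}
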
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":171,"skipped":2747,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:22.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:23:22.503: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c" in namespace "downward-api-8592" to be "Succeeded or Failed" Oct 27 11:23:22.506: INFO: Pod "downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.585483ms Oct 27 11:23:24.510: INFO: Pod "downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007693516s Oct 27 11:23:26.516: INFO: Pod "downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013113377s STEP: Saw pod success Oct 27 11:23:26.516: INFO: Pod "downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c" satisfied condition "Succeeded or Failed" Oct 27 11:23:26.519: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c container client-container: STEP: delete the pod Oct 27 11:23:26.737: INFO: Waiting for pod downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c to disappear Oct 27 11:23:26.758: INFO: Pod downwardapi-volume-7082443d-07f7-4ab2-888e-8d83677edf0c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:26.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8592" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2750,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:26.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 27 11:23:31.330: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:31.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1114" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2757,"failed":0} SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:31.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:23:31.519: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1699 I1027 11:23:31.533570 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1699, replica count: 1 I1027 11:23:32.583947 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:23:33.584151 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:23:34.584451 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:23:34.710: INFO: Created: latency-svc-9dm4g Oct 27 11:23:34.725: INFO: Got endpoints: latency-svc-9dm4g [41.035775ms] Oct 27 11:23:34.764: INFO: Created: latency-svc-hsbbk Oct 27 11:23:34.795: INFO: Got endpoints: latency-svc-hsbbk [69.874094ms] Oct 27 11:23:34.839: INFO: Created: latency-svc-wqswl Oct 27 11:23:34.851: INFO: Got endpoints: latency-svc-wqswl [125.214829ms] Oct 27 11:23:34.869: INFO: Created: latency-svc-9zh5j Oct 27 11:23:34.890: INFO: Got endpoints: latency-svc-9zh5j [165.090238ms] Oct 27 11:23:34.958: INFO: Created: latency-svc-t7ldh Oct 27 11:23:34.962: INFO: Got endpoints: latency-svc-t7ldh [236.417242ms] Oct 27 11:23:35.031: INFO: Created: latency-svc-jsxfz Oct 27 11:23:35.043: INFO: Got endpoints: latency-svc-jsxfz [317.614798ms] Oct 27 11:23:35.127: INFO: Created: latency-svc-f2xqr Oct 27 11:23:35.138: INFO: Got endpoints: latency-svc-f2xqr [412.883271ms] Oct 27 11:23:35.154: INFO: Created: latency-svc-84p95 Oct 27 11:23:35.162: INFO: Got endpoints: latency-svc-84p95 [436.876554ms] Oct 27 11:23:35.223: INFO: Created: latency-svc-htpms Oct 27 11:23:35.275: INFO: Got endpoints: latency-svc-htpms [549.251812ms] Oct 27 11:23:35.295: INFO: Created: latency-svc-hxwv2 Oct 27 11:23:35.307: INFO: Got endpoints: latency-svc-hxwv2 [581.800007ms] Oct 27 11:23:35.353: INFO: Created: latency-svc-4grkp Oct 27 11:23:35.367: INFO: Got endpoints: latency-svc-4grkp [641.63944ms] Oct 27 11:23:35.427: INFO: Created: latency-svc-vnr8m Oct 27 11:23:35.475: INFO: Got endpoints: latency-svc-vnr8m [749.350579ms] Oct 27 
11:23:35.594: INFO: Created: latency-svc-9hc8c Oct 27 11:23:35.638: INFO: Got endpoints: latency-svc-9hc8c [911.950369ms] Oct 27 11:23:35.691: INFO: Created: latency-svc-797c5 Oct 27 11:23:35.742: INFO: Got endpoints: latency-svc-797c5 [1.016520833s] Oct 27 11:23:35.779: INFO: Created: latency-svc-nrgz7 Oct 27 11:23:35.789: INFO: Got endpoints: latency-svc-nrgz7 [1.063171635s] Oct 27 11:23:35.829: INFO: Created: latency-svc-f86jq Oct 27 11:23:35.891: INFO: Got endpoints: latency-svc-f86jq [1.165934313s] Oct 27 11:23:35.895: INFO: Created: latency-svc-xb7r9 Oct 27 11:23:35.902: INFO: Got endpoints: latency-svc-xb7r9 [1.10697444s] Oct 27 11:23:35.928: INFO: Created: latency-svc-8znjr Oct 27 11:23:35.939: INFO: Got endpoints: latency-svc-8znjr [1.088438115s] Oct 27 11:23:35.958: INFO: Created: latency-svc-knncx Oct 27 11:23:35.973: INFO: Got endpoints: latency-svc-knncx [1.08234046s] Oct 27 11:23:36.034: INFO: Created: latency-svc-4hllz Oct 27 11:23:36.048: INFO: Got endpoints: latency-svc-4hllz [1.085695761s] Oct 27 11:23:36.069: INFO: Created: latency-svc-5mwj9 Oct 27 11:23:36.085: INFO: Got endpoints: latency-svc-5mwj9 [1.042010158s] Oct 27 11:23:36.167: INFO: Created: latency-svc-fcvw5 Oct 27 11:23:36.171: INFO: Got endpoints: latency-svc-fcvw5 [1.032597486s] Oct 27 11:23:36.244: INFO: Created: latency-svc-ccw2h Oct 27 11:23:36.266: INFO: Got endpoints: latency-svc-ccw2h [1.103121775s] Oct 27 11:23:36.317: INFO: Created: latency-svc-28b4d Oct 27 11:23:36.325: INFO: Got endpoints: latency-svc-28b4d [1.05010244s] Oct 27 11:23:36.342: INFO: Created: latency-svc-pqfsb Oct 27 11:23:36.355: INFO: Got endpoints: latency-svc-pqfsb [1.047524924s] Oct 27 11:23:36.393: INFO: Created: latency-svc-xnpqh Oct 27 11:23:36.403: INFO: Got endpoints: latency-svc-xnpqh [1.036149999s] Oct 27 11:23:36.454: INFO: Created: latency-svc-wzpb2 Oct 27 11:23:36.483: INFO: Got endpoints: latency-svc-wzpb2 [1.008477652s] Oct 27 11:23:36.483: INFO: Created: latency-svc-hqtjb Oct 27 11:23:36.494: INFO: Got endpoints: latency-svc-hqtjb [856.20188ms] Oct 27 11:23:36.534: INFO: Created: latency-svc-sqmrm Oct 27 11:23:36.548: INFO: Got endpoints: latency-svc-sqmrm [806.161119ms] Oct 27 11:23:36.633: INFO: Created: latency-svc-45bq8 Oct 27 11:23:36.646: INFO: Got endpoints: latency-svc-45bq8 [857.069089ms] Oct 27 11:23:36.687: INFO: Created: latency-svc-8fzwx Oct 27 11:23:36.760: INFO: Got endpoints: latency-svc-8fzwx [869.16433ms] Oct 27 11:23:36.786: INFO: Created: latency-svc-hdkcz Oct 27 11:23:36.801: INFO: Got endpoints: latency-svc-hdkcz [898.52175ms] Oct 27 11:23:36.822: INFO: Created: latency-svc-cksbq Oct 27 11:23:36.837: INFO: Got endpoints: latency-svc-cksbq [898.128135ms] Oct 27 11:23:36.924: INFO: Created: latency-svc-6qds5 Oct 27 11:23:36.936: INFO: Got endpoints: latency-svc-6qds5 [962.539955ms] Oct 27 11:23:36.954: INFO: Created: latency-svc-wdtmj Oct 27 11:23:36.976: INFO: Got endpoints: latency-svc-wdtmj [928.049839ms] Oct 27 11:23:37.102: INFO: Created: latency-svc-hlrnh Oct 27 11:23:37.114: INFO: Got endpoints: latency-svc-hlrnh [1.028892268s] Oct 27 11:23:37.152: INFO: Created: latency-svc-gc2nc Oct 27 11:23:37.162: INFO: Got endpoints: latency-svc-gc2nc [991.496041ms] Oct 27 11:23:37.233: INFO: Created: latency-svc-mztxt Oct 27 11:23:37.251: INFO: Got endpoints: latency-svc-mztxt [985.056827ms] Oct 27 11:23:37.293: INFO: Created: latency-svc-6svkf Oct 27 11:23:37.307: INFO: Got endpoints: latency-svc-6svkf [982.42776ms] Oct 27 11:23:37.377: INFO: Created: latency-svc-m6vjz Oct 27 11:23:37.381: INFO: Got 
endpoints: latency-svc-m6vjz [1.025611366s] Oct 27 11:23:37.436: INFO: Created: latency-svc-wbd4c Oct 27 11:23:37.458: INFO: Got endpoints: latency-svc-wbd4c [1.054273449s] Oct 27 11:23:37.563: INFO: Created: latency-svc-58m5f Oct 27 11:23:37.572: INFO: Got endpoints: latency-svc-58m5f [1.088329735s] Oct 27 11:23:37.597: INFO: Created: latency-svc-vwzk7 Oct 27 11:23:37.620: INFO: Got endpoints: latency-svc-vwzk7 [1.126153427s] Oct 27 11:23:37.724: INFO: Created: latency-svc-fclvb Oct 27 11:23:37.728: INFO: Got endpoints: latency-svc-fclvb [1.179032654s] Oct 27 11:23:37.752: INFO: Created: latency-svc-gnhqh Oct 27 11:23:37.765: INFO: Got endpoints: latency-svc-gnhqh [1.119252848s] Oct 27 11:23:37.819: INFO: Created: latency-svc-ht42z Oct 27 11:23:37.869: INFO: Got endpoints: latency-svc-ht42z [1.108757829s] Oct 27 11:23:37.919: INFO: Created: latency-svc-tdzpg Oct 27 11:23:37.927: INFO: Got endpoints: latency-svc-tdzpg [1.125893158s] Oct 27 11:23:37.944: INFO: Created: latency-svc-r86n9 Oct 27 11:23:38.006: INFO: Got endpoints: latency-svc-r86n9 [1.168279348s] Oct 27 11:23:38.050: INFO: Created: latency-svc-6qjt5 Oct 27 11:23:38.067: INFO: Got endpoints: latency-svc-6qjt5 [1.131415202s] Oct 27 11:23:38.097: INFO: Created: latency-svc-c84vk Oct 27 11:23:38.155: INFO: Got endpoints: latency-svc-c84vk [1.17912471s] Oct 27 11:23:38.185: INFO: Created: latency-svc-2vhss Oct 27 11:23:38.198: INFO: Got endpoints: latency-svc-2vhss [1.084046148s] Oct 27 11:23:38.254: INFO: Created: latency-svc-ft9z2 Oct 27 11:23:38.281: INFO: Got endpoints: latency-svc-ft9z2 [1.11837487s] Oct 27 11:23:38.325: INFO: Created: latency-svc-84x7x Oct 27 11:23:38.337: INFO: Got endpoints: latency-svc-84x7x [1.085746071s] Oct 27 11:23:38.419: INFO: Created: latency-svc-7nrxp Oct 27 11:23:38.423: INFO: Got endpoints: latency-svc-7nrxp [1.115482717s] Oct 27 11:23:38.460: INFO: Created: latency-svc-kb524 Oct 27 11:23:38.475: INFO: Got endpoints: latency-svc-kb524 [1.09436934s] Oct 27 11:23:38.517: INFO: Created: latency-svc-gvd9d Oct 27 11:23:38.544: INFO: Got endpoints: latency-svc-gvd9d [1.086247903s] Oct 27 11:23:38.580: INFO: Created: latency-svc-4m98g Oct 27 11:23:38.604: INFO: Got endpoints: latency-svc-4m98g [1.032374709s] Oct 27 11:23:38.635: INFO: Created: latency-svc-6bgd5 Oct 27 11:23:38.676: INFO: Got endpoints: latency-svc-6bgd5 [1.056106813s] Oct 27 11:23:38.703: INFO: Created: latency-svc-58drc Oct 27 11:23:38.727: INFO: Got endpoints: latency-svc-58drc [998.91568ms] Oct 27 11:23:38.758: INFO: Created: latency-svc-5k8vv Oct 27 11:23:38.773: INFO: Got endpoints: latency-svc-5k8vv [1.008088446s] Oct 27 11:23:38.808: INFO: Created: latency-svc-69b7k Oct 27 11:23:38.818: INFO: Got endpoints: latency-svc-69b7k [948.676067ms] Oct 27 11:23:38.838: INFO: Created: latency-svc-gv6nj Oct 27 11:23:38.861: INFO: Got endpoints: latency-svc-gv6nj [933.829068ms] Oct 27 11:23:38.933: INFO: Created: latency-svc-dlb7k Oct 27 11:23:38.939: INFO: Got endpoints: latency-svc-dlb7k [933.653746ms] Oct 27 11:23:39.013: INFO: Created: latency-svc-85q6n Oct 27 11:23:39.023: INFO: Got endpoints: latency-svc-85q6n [955.89767ms] Oct 27 11:23:39.105: INFO: Created: latency-svc-k8x6t Oct 27 11:23:39.119: INFO: Got endpoints: latency-svc-k8x6t [964.381286ms] Oct 27 11:23:39.165: INFO: Created: latency-svc-d47qz Oct 27 11:23:39.216: INFO: Got endpoints: latency-svc-d47qz [1.017782447s] Oct 27 11:23:39.240: INFO: Created: latency-svc-sl55d Oct 27 11:23:39.264: INFO: Got endpoints: latency-svc-sl55d [983.066513ms] Oct 27 11:23:39.335: INFO: 
Created: latency-svc-bcpz4 Oct 27 11:23:39.339: INFO: Got endpoints: latency-svc-bcpz4 [1.002579753s] Oct 27 11:23:39.381: INFO: Created: latency-svc-xbl4l Oct 27 11:23:39.403: INFO: Got endpoints: latency-svc-xbl4l [979.455431ms] Oct 27 11:23:39.429: INFO: Created: latency-svc-kmt4h Oct 27 11:23:39.485: INFO: Got endpoints: latency-svc-kmt4h [1.010274175s] Oct 27 11:23:39.510: INFO: Created: latency-svc-r97xl Oct 27 11:23:39.523: INFO: Got endpoints: latency-svc-r97xl [978.425034ms] Oct 27 11:23:39.561: INFO: Created: latency-svc-67l8l Oct 27 11:23:39.640: INFO: Got endpoints: latency-svc-67l8l [1.035963064s] Oct 27 11:23:39.672: INFO: Created: latency-svc-5cb8h Oct 27 11:23:39.686: INFO: Got endpoints: latency-svc-5cb8h [1.009385562s] Oct 27 11:23:39.779: INFO: Created: latency-svc-hs87z Oct 27 11:23:39.788: INFO: Got endpoints: latency-svc-hs87z [1.061660168s] Oct 27 11:23:39.831: INFO: Created: latency-svc-jtmlr Oct 27 11:23:39.842: INFO: Got endpoints: latency-svc-jtmlr [1.068912707s] Oct 27 11:23:39.858: INFO: Created: latency-svc-b82jp Oct 27 11:23:39.872: INFO: Got endpoints: latency-svc-b82jp [1.054231861s] Oct 27 11:23:39.922: INFO: Created: latency-svc-56lrt Oct 27 11:23:39.943: INFO: Created: latency-svc-bgk77 Oct 27 11:23:39.943: INFO: Got endpoints: latency-svc-56lrt [1.08256226s] Oct 27 11:23:39.969: INFO: Got endpoints: latency-svc-bgk77 [1.029158915s] Oct 27 11:23:39.999: INFO: Created: latency-svc-2vzrs Oct 27 11:23:40.011: INFO: Got endpoints: latency-svc-2vzrs [988.290818ms] Oct 27 11:23:40.067: INFO: Created: latency-svc-p6vnd Oct 27 11:23:40.086: INFO: Got endpoints: latency-svc-p6vnd [966.607406ms] Oct 27 11:23:40.111: INFO: Created: latency-svc-rpl88 Oct 27 11:23:40.120: INFO: Got endpoints: latency-svc-rpl88 [903.63482ms] Oct 27 11:23:40.209: INFO: Created: latency-svc-tflxv Oct 27 11:23:40.215: INFO: Got endpoints: latency-svc-tflxv [950.731247ms] Oct 27 11:23:40.266: INFO: Created: latency-svc-lh8s7 Oct 27 11:23:40.277: INFO: Got endpoints: latency-svc-lh8s7 [937.314061ms] Oct 27 11:23:40.296: INFO: Created: latency-svc-b4rm5 Oct 27 11:23:40.308: INFO: Got endpoints: latency-svc-b4rm5 [905.036403ms] Oct 27 11:23:40.347: INFO: Created: latency-svc-zsjdj Oct 27 11:23:40.367: INFO: Got endpoints: latency-svc-zsjdj [881.111825ms] Oct 27 11:23:40.401: INFO: Created: latency-svc-5m65w Oct 27 11:23:40.428: INFO: Got endpoints: latency-svc-5m65w [904.752355ms] Oct 27 11:23:40.473: INFO: Created: latency-svc-r96j6 Oct 27 11:23:40.478: INFO: Got endpoints: latency-svc-r96j6 [837.305085ms] Oct 27 11:23:40.512: INFO: Created: latency-svc-jbz74 Oct 27 11:23:40.524: INFO: Got endpoints: latency-svc-jbz74 [838.027168ms] Oct 27 11:23:40.545: INFO: Created: latency-svc-vw4bg Oct 27 11:23:40.560: INFO: Got endpoints: latency-svc-vw4bg [772.012535ms] Oct 27 11:23:40.605: INFO: Created: latency-svc-kf5pf Oct 27 11:23:40.619: INFO: Got endpoints: latency-svc-kf5pf [776.793354ms] Oct 27 11:23:40.638: INFO: Created: latency-svc-ssnht Oct 27 11:23:40.650: INFO: Got endpoints: latency-svc-ssnht [777.936892ms] Oct 27 11:23:40.677: INFO: Created: latency-svc-2rbch Oct 27 11:23:40.687: INFO: Got endpoints: latency-svc-2rbch [743.382541ms] Oct 27 11:23:40.736: INFO: Created: latency-svc-blc8x Oct 27 11:23:40.750: INFO: Got endpoints: latency-svc-blc8x [780.976835ms] Oct 27 11:23:40.776: INFO: Created: latency-svc-hcgn4 Oct 27 11:23:40.803: INFO: Got endpoints: latency-svc-hcgn4 [791.724347ms] Oct 27 11:23:40.827: INFO: Created: latency-svc-6spvw Oct 27 11:23:40.861: INFO: Got endpoints: 
latency-svc-6spvw [774.8917ms] Oct 27 11:23:40.905: INFO: Created: latency-svc-rlnrv Oct 27 11:23:40.930: INFO: Got endpoints: latency-svc-rlnrv [809.681256ms] Oct 27 11:23:40.950: INFO: Created: latency-svc-xq2rd Oct 27 11:23:41.011: INFO: Got endpoints: latency-svc-xq2rd [796.100557ms] Oct 27 11:23:41.019: INFO: Created: latency-svc-8pr96 Oct 27 11:23:41.031: INFO: Got endpoints: latency-svc-8pr96 [753.867063ms] Oct 27 11:23:41.074: INFO: Created: latency-svc-756vw Oct 27 11:23:41.109: INFO: Got endpoints: latency-svc-756vw [801.73168ms] Oct 27 11:23:41.191: INFO: Created: latency-svc-5qc8p Oct 27 11:23:41.196: INFO: Got endpoints: latency-svc-5qc8p [829.138376ms] Oct 27 11:23:41.221: INFO: Created: latency-svc-wp2rb Oct 27 11:23:41.230: INFO: Got endpoints: latency-svc-wp2rb [802.229995ms] Oct 27 11:23:41.241: INFO: Created: latency-svc-z8s4j Oct 27 11:23:41.255: INFO: Got endpoints: latency-svc-z8s4j [776.894654ms] Oct 27 11:23:41.271: INFO: Created: latency-svc-dqhd6 Oct 27 11:23:41.285: INFO: Got endpoints: latency-svc-dqhd6 [761.442555ms] Oct 27 11:23:41.329: INFO: Created: latency-svc-nlp88 Oct 27 11:23:41.339: INFO: Got endpoints: latency-svc-nlp88 [778.863761ms] Oct 27 11:23:41.462: INFO: Created: latency-svc-frv8m Oct 27 11:23:41.484: INFO: Got endpoints: latency-svc-frv8m [864.743065ms] Oct 27 11:23:41.518: INFO: Created: latency-svc-6ltwl Oct 27 11:23:41.538: INFO: Got endpoints: latency-svc-6ltwl [887.462324ms] Oct 27 11:23:41.617: INFO: Created: latency-svc-vjlbt Oct 27 11:23:41.646: INFO: Got endpoints: latency-svc-vjlbt [959.281579ms] Oct 27 11:23:41.766: INFO: Created: latency-svc-mp7vb Oct 27 11:23:41.790: INFO: Got endpoints: latency-svc-mp7vb [1.040245829s] Oct 27 11:23:41.814: INFO: Created: latency-svc-v6wq7 Oct 27 11:23:41.826: INFO: Got endpoints: latency-svc-v6wq7 [1.022803359s] Oct 27 11:23:41.865: INFO: Created: latency-svc-b9hn5 Oct 27 11:23:41.903: INFO: Got endpoints: latency-svc-b9hn5 [1.042375662s] Oct 27 11:23:41.926: INFO: Created: latency-svc-s77mj Oct 27 11:23:41.941: INFO: Got endpoints: latency-svc-s77mj [1.011241737s] Oct 27 11:23:41.982: INFO: Created: latency-svc-hjxmq Oct 27 11:23:42.041: INFO: Got endpoints: latency-svc-hjxmq [1.030354896s] Oct 27 11:23:42.509: INFO: Created: latency-svc-cjdq5 Oct 27 11:23:42.514: INFO: Got endpoints: latency-svc-cjdq5 [1.483020317s] Oct 27 11:23:43.551: INFO: Created: latency-svc-2pgxv Oct 27 11:23:43.592: INFO: Got endpoints: latency-svc-2pgxv [2.482819217s] Oct 27 11:23:43.650: INFO: Created: latency-svc-5jsjp Oct 27 11:23:43.706: INFO: Got endpoints: latency-svc-5jsjp [2.510467225s] Oct 27 11:23:43.738: INFO: Created: latency-svc-nhgnm Oct 27 11:23:43.746: INFO: Got endpoints: latency-svc-nhgnm [2.516193743s] Oct 27 11:23:43.771: INFO: Created: latency-svc-4frmj Oct 27 11:23:43.794: INFO: Got endpoints: latency-svc-4frmj [2.539216837s] Oct 27 11:23:43.838: INFO: Created: latency-svc-xfq82 Oct 27 11:23:43.848: INFO: Got endpoints: latency-svc-xfq82 [2.56231687s] Oct 27 11:23:43.864: INFO: Created: latency-svc-z9756 Oct 27 11:23:43.882: INFO: Got endpoints: latency-svc-z9756 [2.542696121s] Oct 27 11:23:43.894: INFO: Created: latency-svc-wnsgx Oct 27 11:23:43.908: INFO: Got endpoints: latency-svc-wnsgx [2.424281428s] Oct 27 11:23:43.927: INFO: Created: latency-svc-sp8gn Oct 27 11:23:43.981: INFO: Got endpoints: latency-svc-sp8gn [2.443479332s] Oct 27 11:23:43.985: INFO: Created: latency-svc-swd4h Oct 27 11:23:43.993: INFO: Got endpoints: latency-svc-swd4h [2.346430158s] Oct 27 11:23:44.011: INFO: Created: 
latency-svc-vh4b5 Oct 27 11:23:44.033: INFO: Got endpoints: latency-svc-vh4b5 [2.242834516s] Oct 27 11:23:44.057: INFO: Created: latency-svc-448rt Oct 27 11:23:44.066: INFO: Got endpoints: latency-svc-448rt [2.239778886s] Oct 27 11:23:44.080: INFO: Created: latency-svc-jtdfb Oct 27 11:23:44.119: INFO: Got endpoints: latency-svc-jtdfb [2.215627497s] Oct 27 11:23:44.131: INFO: Created: latency-svc-2gmzp Oct 27 11:23:44.155: INFO: Got endpoints: latency-svc-2gmzp [2.214306219s] Oct 27 11:23:44.180: INFO: Created: latency-svc-w28rv Oct 27 11:23:44.212: INFO: Got endpoints: latency-svc-w28rv [2.169943419s] Oct 27 11:23:44.305: INFO: Created: latency-svc-s9wzm Oct 27 11:23:44.309: INFO: Got endpoints: latency-svc-s9wzm [1.795111704s] Oct 27 11:23:44.372: INFO: Created: latency-svc-r5crt Oct 27 11:23:44.443: INFO: Got endpoints: latency-svc-r5crt [850.087118ms] Oct 27 11:23:44.465: INFO: Created: latency-svc-sjpnr Oct 27 11:23:44.475: INFO: Got endpoints: latency-svc-sjpnr [769.031322ms] Oct 27 11:23:44.497: INFO: Created: latency-svc-fv4pm Oct 27 11:23:44.512: INFO: Got endpoints: latency-svc-fv4pm [765.834776ms] Oct 27 11:23:44.584: INFO: Created: latency-svc-brbjv Oct 27 11:23:44.586: INFO: Got endpoints: latency-svc-brbjv [792.378941ms] Oct 27 11:23:44.621: INFO: Created: latency-svc-kk447 Oct 27 11:23:44.633: INFO: Got endpoints: latency-svc-kk447 [784.998706ms] Oct 27 11:23:44.671: INFO: Created: latency-svc-mqr5x Oct 27 11:23:44.724: INFO: Got endpoints: latency-svc-mqr5x [841.758194ms] Oct 27 11:23:44.750: INFO: Created: latency-svc-vx59n Oct 27 11:23:44.759: INFO: Got endpoints: latency-svc-vx59n [850.56121ms] Oct 27 11:23:44.776: INFO: Created: latency-svc-crd8d Oct 27 11:23:44.789: INFO: Got endpoints: latency-svc-crd8d [807.840381ms] Oct 27 11:23:44.807: INFO: Created: latency-svc-l6sgh Oct 27 11:23:44.862: INFO: Got endpoints: latency-svc-l6sgh [868.766871ms] Oct 27 11:23:44.866: INFO: Created: latency-svc-ftlnt Oct 27 11:23:44.886: INFO: Got endpoints: latency-svc-ftlnt [853.040041ms] Oct 27 11:23:44.929: INFO: Created: latency-svc-j86t9 Oct 27 11:23:44.946: INFO: Got endpoints: latency-svc-j86t9 [880.249807ms] Oct 27 11:23:45.018: INFO: Created: latency-svc-rnckk Oct 27 11:23:45.035: INFO: Got endpoints: latency-svc-rnckk [915.866144ms] Oct 27 11:23:45.059: INFO: Created: latency-svc-c77cr Oct 27 11:23:45.072: INFO: Got endpoints: latency-svc-c77cr [916.454789ms] Oct 27 11:23:45.097: INFO: Created: latency-svc-ckpz4 Oct 27 11:23:45.107: INFO: Got endpoints: latency-svc-ckpz4 [895.567427ms] Oct 27 11:23:45.149: INFO: Created: latency-svc-k94x9 Oct 27 11:23:45.153: INFO: Got endpoints: latency-svc-k94x9 [844.251202ms] Oct 27 11:23:45.173: INFO: Created: latency-svc-hztkh Oct 27 11:23:45.186: INFO: Got endpoints: latency-svc-hztkh [743.289803ms] Oct 27 11:23:45.202: INFO: Created: latency-svc-nf6rv Oct 27 11:23:45.216: INFO: Got endpoints: latency-svc-nf6rv [740.689584ms] Oct 27 11:23:45.232: INFO: Created: latency-svc-jz2cf Oct 27 11:23:45.247: INFO: Got endpoints: latency-svc-jz2cf [734.584077ms] Oct 27 11:23:45.308: INFO: Created: latency-svc-f7cmz Oct 27 11:23:45.332: INFO: Got endpoints: latency-svc-f7cmz [745.096194ms] Oct 27 11:23:45.353: INFO: Created: latency-svc-cd25x Oct 27 11:23:45.367: INFO: Got endpoints: latency-svc-cd25x [734.685613ms] Oct 27 11:23:45.420: INFO: Created: latency-svc-6lrqz Oct 27 11:23:45.428: INFO: Got endpoints: latency-svc-6lrqz [703.504844ms] Oct 27 11:23:45.449: INFO: Created: latency-svc-wqln8 Oct 27 11:23:45.465: INFO: Got endpoints: 
latency-svc-wqln8 [705.621592ms] Oct 27 11:23:45.475: INFO: Created: latency-svc-tnxfl Oct 27 11:23:45.488: INFO: Got endpoints: latency-svc-tnxfl [698.919805ms] Oct 27 11:23:45.506: INFO: Created: latency-svc-8xvbw Oct 27 11:23:45.593: INFO: Got endpoints: latency-svc-8xvbw [731.019333ms] Oct 27 11:23:45.594: INFO: Created: latency-svc-8vgqt Oct 27 11:23:45.609: INFO: Got endpoints: latency-svc-8vgqt [722.74521ms] Oct 27 11:23:45.662: INFO: Created: latency-svc-hhvbz Oct 27 11:23:45.682: INFO: Got endpoints: latency-svc-hhvbz [735.426791ms] Oct 27 11:23:45.748: INFO: Created: latency-svc-2h4l7 Oct 27 11:23:45.754: INFO: Got endpoints: latency-svc-2h4l7 [719.265082ms] Oct 27 11:23:45.805: INFO: Created: latency-svc-vmxgh Oct 27 11:23:45.820: INFO: Got endpoints: latency-svc-vmxgh [748.669681ms] Oct 27 11:23:45.915: INFO: Created: latency-svc-jzv9c Oct 27 11:23:45.921: INFO: Got endpoints: latency-svc-jzv9c [814.272276ms] Oct 27 11:23:45.964: INFO: Created: latency-svc-hz5gb Oct 27 11:23:45.976: INFO: Got endpoints: latency-svc-hz5gb [822.419887ms] Oct 27 11:23:45.991: INFO: Created: latency-svc-wfpms Oct 27 11:23:46.007: INFO: Got endpoints: latency-svc-wfpms [820.666682ms] Oct 27 11:23:46.048: INFO: Created: latency-svc-wz492 Oct 27 11:23:46.051: INFO: Got endpoints: latency-svc-wz492 [835.06646ms] Oct 27 11:23:46.076: INFO: Created: latency-svc-hqgpj Oct 27 11:23:46.096: INFO: Got endpoints: latency-svc-hqgpj [849.617894ms] Oct 27 11:23:46.121: INFO: Created: latency-svc-wzkn9 Oct 27 11:23:46.133: INFO: Got endpoints: latency-svc-wzkn9 [801.751612ms] Oct 27 11:23:46.174: INFO: Created: latency-svc-gkks6 Oct 27 11:23:46.214: INFO: Got endpoints: latency-svc-gkks6 [846.322733ms] Oct 27 11:23:46.214: INFO: Created: latency-svc-25gdv Oct 27 11:23:46.262: INFO: Got endpoints: latency-svc-25gdv [834.124792ms] Oct 27 11:23:46.317: INFO: Created: latency-svc-z29x4 Oct 27 11:23:46.321: INFO: Got endpoints: latency-svc-z29x4 [855.860469ms] Oct 27 11:23:46.343: INFO: Created: latency-svc-mbx6j Oct 27 11:23:46.357: INFO: Got endpoints: latency-svc-mbx6j [868.541939ms] Oct 27 11:23:46.373: INFO: Created: latency-svc-clz6j Oct 27 11:23:46.391: INFO: Got endpoints: latency-svc-clz6j [798.432646ms] Oct 27 11:23:46.416: INFO: Created: latency-svc-r56nc Oct 27 11:23:46.449: INFO: Got endpoints: latency-svc-r56nc [839.823435ms] Oct 27 11:23:46.464: INFO: Created: latency-svc-h5svc Oct 27 11:23:46.477: INFO: Got endpoints: latency-svc-h5svc [795.613947ms] Oct 27 11:23:46.498: INFO: Created: latency-svc-zpqqr Oct 27 11:23:46.521: INFO: Got endpoints: latency-svc-zpqqr [766.548902ms] Oct 27 11:23:46.545: INFO: Created: latency-svc-5226v Oct 27 11:23:46.616: INFO: Got endpoints: latency-svc-5226v [795.799067ms] Oct 27 11:23:46.619: INFO: Created: latency-svc-bxzg8 Oct 27 11:23:46.623: INFO: Got endpoints: latency-svc-bxzg8 [701.560423ms] Oct 27 11:23:46.650: INFO: Created: latency-svc-lzfwx Oct 27 11:23:46.659: INFO: Got endpoints: latency-svc-lzfwx [682.943327ms] Oct 27 11:23:46.677: INFO: Created: latency-svc-r8882 Oct 27 11:23:46.688: INFO: Got endpoints: latency-svc-r8882 [681.720542ms] Oct 27 11:23:46.761: INFO: Created: latency-svc-7njgn Oct 27 11:23:46.778: INFO: Got endpoints: latency-svc-7njgn [727.113464ms] Oct 27 11:23:46.779: INFO: Created: latency-svc-x2ftb Oct 27 11:23:46.811: INFO: Got endpoints: latency-svc-x2ftb [715.084513ms] Oct 27 11:23:46.853: INFO: Created: latency-svc-j7q47 Oct 27 11:23:46.885: INFO: Got endpoints: latency-svc-j7q47 [751.671097ms] Oct 27 11:23:46.898: INFO: Created: 
latency-svc-p5z26 Oct 27 11:23:46.908: INFO: Got endpoints: latency-svc-p5z26 [693.92041ms] Oct 27 11:23:46.923: INFO: Created: latency-svc-bfsf6 Oct 27 11:23:46.932: INFO: Got endpoints: latency-svc-bfsf6 [670.143099ms] Oct 27 11:23:46.965: INFO: Created: latency-svc-z767s Oct 27 11:23:46.981: INFO: Got endpoints: latency-svc-z767s [660.534129ms] Oct 27 11:23:47.024: INFO: Created: latency-svc-fbsdn Oct 27 11:23:47.029: INFO: Got endpoints: latency-svc-fbsdn [671.929655ms] Oct 27 11:23:47.074: INFO: Created: latency-svc-s29j7 Oct 27 11:23:47.083: INFO: Got endpoints: latency-svc-s29j7 [691.969984ms] Oct 27 11:23:47.102: INFO: Created: latency-svc-9wgmf Oct 27 11:23:47.114: INFO: Got endpoints: latency-svc-9wgmf [665.180798ms] Oct 27 11:23:47.175: INFO: Created: latency-svc-4j7ms Oct 27 11:23:47.192: INFO: Got endpoints: latency-svc-4j7ms [714.771424ms] Oct 27 11:23:47.226: INFO: Created: latency-svc-p44gd Oct 27 11:23:47.240: INFO: Got endpoints: latency-svc-p44gd [719.000353ms] Oct 27 11:23:47.294: INFO: Created: latency-svc-k9jx7 Oct 27 11:23:47.297: INFO: Got endpoints: latency-svc-k9jx7 [680.77154ms] Oct 27 11:23:47.328: INFO: Created: latency-svc-p4dl5 Oct 27 11:23:47.349: INFO: Got endpoints: latency-svc-p4dl5 [725.574318ms] Oct 27 11:23:47.373: INFO: Created: latency-svc-mcv7r Oct 27 11:23:47.385: INFO: Got endpoints: latency-svc-mcv7r [726.873976ms] Oct 27 11:23:47.449: INFO: Created: latency-svc-j9tm9 Oct 27 11:23:47.457: INFO: Got endpoints: latency-svc-j9tm9 [768.607947ms] Oct 27 11:23:47.490: INFO: Created: latency-svc-9kqhg Oct 27 11:23:47.500: INFO: Got endpoints: latency-svc-9kqhg [721.157459ms] Oct 27 11:23:47.535: INFO: Created: latency-svc-q8pr2 Oct 27 11:23:47.718: INFO: Got endpoints: latency-svc-q8pr2 [907.022181ms] Oct 27 11:23:47.742: INFO: Created: latency-svc-ggvrn Oct 27 11:23:47.764: INFO: Got endpoints: latency-svc-ggvrn [879.237613ms] Oct 27 11:23:47.880: INFO: Created: latency-svc-l4lzc Oct 27 11:23:47.883: INFO: Got endpoints: latency-svc-l4lzc [975.055239ms] Oct 27 11:23:48.006: INFO: Created: latency-svc-ntcjm Oct 27 11:23:48.017: INFO: Got endpoints: latency-svc-ntcjm [1.084657147s] Oct 27 11:23:48.062: INFO: Created: latency-svc-pxxq6 Oct 27 11:23:48.077: INFO: Got endpoints: latency-svc-pxxq6 [1.096158881s] Oct 27 11:23:48.093: INFO: Created: latency-svc-hvtv4 Oct 27 11:23:48.155: INFO: Got endpoints: latency-svc-hvtv4 [1.126355193s] Oct 27 11:23:48.335: INFO: Created: latency-svc-bhb8d Oct 27 11:23:48.772: INFO: Got endpoints: latency-svc-bhb8d [1.688944994s] Oct 27 11:23:48.789: INFO: Created: latency-svc-bmwcb Oct 27 11:23:48.800: INFO: Got endpoints: latency-svc-bmwcb [1.68612494s] Oct 27 11:23:48.826: INFO: Created: latency-svc-sxljp Oct 27 11:23:48.836: INFO: Got endpoints: latency-svc-sxljp [1.643845593s] Oct 27 11:23:48.868: INFO: Created: latency-svc-p67hg Oct 27 11:23:48.891: INFO: Got endpoints: latency-svc-p67hg [1.650915997s] Oct 27 11:23:49.336: INFO: Created: latency-svc-6g6n9 Oct 27 11:23:49.339: INFO: Got endpoints: latency-svc-6g6n9 [2.041882472s] Oct 27 11:23:49.339: INFO: Latencies: [69.874094ms 125.214829ms 165.090238ms 236.417242ms 317.614798ms 412.883271ms 436.876554ms 549.251812ms 581.800007ms 641.63944ms 660.534129ms 665.180798ms 670.143099ms 671.929655ms 680.77154ms 681.720542ms 682.943327ms 691.969984ms 693.92041ms 698.919805ms 701.560423ms 703.504844ms 705.621592ms 714.771424ms 715.084513ms 719.000353ms 719.265082ms 721.157459ms 722.74521ms 725.574318ms 726.873976ms 727.113464ms 731.019333ms 734.584077ms 734.685613ms 
735.426791ms 740.689584ms 743.289803ms 743.382541ms 745.096194ms 748.669681ms 749.350579ms 751.671097ms 753.867063ms 761.442555ms 765.834776ms 766.548902ms 768.607947ms 769.031322ms 772.012535ms 774.8917ms 776.793354ms 776.894654ms 777.936892ms 778.863761ms 780.976835ms 784.998706ms 791.724347ms 792.378941ms 795.613947ms 795.799067ms 796.100557ms 798.432646ms 801.73168ms 801.751612ms 802.229995ms 806.161119ms 807.840381ms 809.681256ms 814.272276ms 820.666682ms 822.419887ms 829.138376ms 834.124792ms 835.06646ms 837.305085ms 838.027168ms 839.823435ms 841.758194ms 844.251202ms 846.322733ms 849.617894ms 850.087118ms 850.56121ms 853.040041ms 855.860469ms 856.20188ms 857.069089ms 864.743065ms 868.541939ms 868.766871ms 869.16433ms 879.237613ms 880.249807ms 881.111825ms 887.462324ms 895.567427ms 898.128135ms 898.52175ms 903.63482ms 904.752355ms 905.036403ms 907.022181ms 911.950369ms 915.866144ms 916.454789ms 928.049839ms 933.653746ms 933.829068ms 937.314061ms 948.676067ms 950.731247ms 955.89767ms 959.281579ms 962.539955ms 964.381286ms 966.607406ms 975.055239ms 978.425034ms 979.455431ms 982.42776ms 983.066513ms 985.056827ms 988.290818ms 991.496041ms 998.91568ms 1.002579753s 1.008088446s 1.008477652s 1.009385562s 1.010274175s 1.011241737s 1.016520833s 1.017782447s 1.022803359s 1.025611366s 1.028892268s 1.029158915s 1.030354896s 1.032374709s 1.032597486s 1.035963064s 1.036149999s 1.040245829s 1.042010158s 1.042375662s 1.047524924s 1.05010244s 1.054231861s 1.054273449s 1.056106813s 1.061660168s 1.063171635s 1.068912707s 1.08234046s 1.08256226s 1.084046148s 1.084657147s 1.085695761s 1.085746071s 1.086247903s 1.088329735s 1.088438115s 1.09436934s 1.096158881s 1.103121775s 1.10697444s 1.108757829s 1.115482717s 1.11837487s 1.119252848s 1.125893158s 1.126153427s 1.126355193s 1.131415202s 1.165934313s 1.168279348s 1.179032654s 1.17912471s 1.483020317s 1.643845593s 1.650915997s 1.68612494s 1.688944994s 1.795111704s 2.041882472s 2.169943419s 2.214306219s 2.215627497s 2.239778886s 2.242834516s 2.346430158s 2.424281428s 2.443479332s 2.482819217s 2.510467225s 2.516193743s 2.539216837s 2.542696121s 2.56231687s] Oct 27 11:23:49.339: INFO: 50 %ile: 904.752355ms Oct 27 11:23:49.339: INFO: 90 %ile: 1.643845593s Oct 27 11:23:49.339: INFO: 99 %ile: 2.542696121s Oct 27 11:23:49.339: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:49.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1699" for this suite. • [SLOW TEST:17.949 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":174,"skipped":2760,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:49.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:49.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9726" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":175,"skipped":2768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:49.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Oct 27 11:23:49.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5901' Oct 27 11:23:53.523: INFO: stderr: "" Oct 27 11:23:53.523: INFO: stdout: "pod/pause created\n" Oct 27 11:23:53.523: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 27 11:23:53.523: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5901" to be "running and ready" Oct 27 11:23:53.539: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.516403ms Oct 27 11:23:55.565: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041572091s Oct 27 11:23:57.622: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.099168735s Oct 27 11:23:57.622: INFO: Pod "pause" satisfied condition "running and ready" Oct 27 11:23:57.622: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Oct 27 11:23:57.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5901' Oct 27 11:23:57.756: INFO: stderr: "" Oct 27 11:23:57.756: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 27 11:23:57.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5901' Oct 27 11:23:57.879: INFO: stderr: "" Oct 27 11:23:57.879: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 27 11:23:57.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5901' Oct 27 11:23:58.020: INFO: stderr: "" Oct 27 11:23:58.020: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 27 11:23:58.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5901' Oct 27 11:23:58.162: INFO: stderr: "" Oct 27 11:23:58.162: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Oct 27 11:23:58.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5901' Oct 27 11:23:58.339: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:23:58.339: INFO: stdout: "pod \"pause\" force deleted\n" Oct 27 11:23:58.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5901' Oct 27 11:23:58.657: INFO: stderr: "No resources found in kubectl-5901 namespace.\n" Oct 27 11:23:58.657: INFO: stdout: "" Oct 27 11:23:58.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5901 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 27 11:23:58.843: INFO: stderr: "" Oct 27 11:23:58.843: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:23:58.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5901" for this suite. • [SLOW TEST:9.309 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":176,"skipped":2797,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:23:58.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-cf900641-f9ab-419c-ad80-b8e542e999d1 STEP: Creating a pod to test consume configMaps Oct 27 11:23:59.042: INFO: Waiting up to 5m0s for pod "pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463" in namespace "configmap-8284" to be "Succeeded or Failed" Oct 27 11:23:59.269: INFO: Pod "pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463": Phase="Pending", Reason="", readiness=false. Elapsed: 227.382802ms Oct 27 11:24:01.515: INFO: Pod "pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.472720754s Oct 27 11:24:03.523: INFO: Pod "pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.480759237s STEP: Saw pod success Oct 27 11:24:03.523: INFO: Pod "pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463" satisfied condition "Succeeded or Failed" Oct 27 11:24:03.526: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463 container configmap-volume-test: STEP: delete the pod Oct 27 11:24:03.569: INFO: Waiting for pod pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463 to disappear Oct 27 11:24:03.574: INFO: Pod pod-configmaps-e536fd41-dd42-462e-a019-4624ce871463 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:24:03.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8284" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2807,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:24:03.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:24:04.684: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:24:06.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:24:08.964: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394644, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:24:12.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:13.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:14.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:15.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:16.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:17.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:18.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Oct 27 11:24:19.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:24:19.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5339" for this suite. STEP: Destroying namespace "webhook-5339-markers" for this suite. 
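For reference, the create-then-delete flow exercised above on the dummy ValidatingWebhookConfiguration corresponds roughly to the following client-go calls. This is a minimal illustrative sketch, not part of the e2e framework output: the object name and the kubeconfig path are assumptions, and the configuration is created with no webhooks, which is enough to show the delete path the test checks.

	// Minimal client-go sketch (illustrative only): create a dummy
	// ValidatingWebhookConfiguration, then delete it. The test above verifies
	// that such deletions succeed even while admission webhooks are registered.
	package main

	import (
		"context"
		"fmt"

		admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the same kubeconfig path used throughout this run.
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx := context.TODO()

		// Dummy configuration with an empty webhook list; the name is illustrative.
		dummy := &admissionregistrationv1.ValidatingWebhookConfiguration{
			ObjectMeta: metav1.ObjectMeta{Name: "e2e-dummy-validating-webhook-configuration"},
		}

		created, err := client.AdmissionregistrationV1().
			ValidatingWebhookConfigurations().
			Create(ctx, dummy, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created:", created.Name)

		// Deleting the configuration should not be blocked by any registered webhook.
		if err := client.AdmissionregistrationV1().
			ValidatingWebhookConfigurations().
			Delete(ctx, created.Name, metav1.DeleteOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("deleted:", created.Name)
	}

Both deletions in the log above succeed even though the registered webhooks targeted ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, which is the behavior this conformance test asserts.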
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.714 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":178,"skipped":2811,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:24:19.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:24:19.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b" in namespace "projected-3682" to be "Succeeded or Failed" Oct 27 11:24:19.880: INFO: Pod "downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b": Phase="Pending", Reason="", readiness=false. Elapsed: 83.705262ms Oct 27 11:24:21.970: INFO: Pod "downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173646258s Oct 27 11:24:23.974: INFO: Pod "downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178048044s Oct 27 11:24:25.980: INFO: Pod "downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.183486852s STEP: Saw pod success Oct 27 11:24:25.980: INFO: Pod "downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b" satisfied condition "Succeeded or Failed" Oct 27 11:24:25.984: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b container client-container: STEP: delete the pod Oct 27 11:24:26.038: INFO: Waiting for pod downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b to disappear Oct 27 11:24:26.047: INFO: Pod downwardapi-volume-fcc27948-1597-49e7-84ef-e0165bdb351b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:24:26.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3682" for this suite. • [SLOW TEST:6.442 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":2812,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:24:26.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-ckqdl in namespace proxy-1542 I1027 11:24:26.197450 7 runners.go:190] Created replication controller with name: proxy-service-ckqdl, namespace: proxy-1542, replica count: 1 I1027 11:24:27.247870 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:24:28.248117 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:24:29.248400 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:30.248591 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:31.248786 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 1 runningButNotReady I1027 11:24:32.248980 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:33.249176 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:34.249388 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:35.249632 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:36.249838 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:37.250021 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1027 11:24:38.250264 7 runners.go:190] proxy-service-ckqdl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:24:38.254: INFO: setup took 12.161090859s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 27 11:24:38.265: INFO: (0) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 10.323725ms) Oct 27 11:24:38.265: INFO: (0) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 10.749335ms) Oct 27 11:24:38.265: INFO: (0) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 10.865765ms) Oct 27 11:24:38.265: INFO: (0) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 10.667899ms) Oct 27 11:24:38.265: INFO: (0) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 10.804519ms) Oct 27 11:24:38.266: INFO: (0) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 11.224235ms) Oct 27 11:24:38.266: INFO: (0) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 11.256882ms) Oct 27 11:24:38.266: INFO: (0) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 11.38302ms) Oct 27 11:24:38.266: INFO: (0) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 11.468841ms) Oct 27 11:24:38.266: INFO: (0) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 11.443232ms) Oct 27 11:24:38.268: INFO: (0) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 14.065501ms) Oct 27 11:24:38.271: INFO: (0) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 16.74235ms) Oct 27 11:24:38.271: INFO: (0) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 16.549372ms) Oct 27 11:24:38.271: INFO: (0) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 16.803419ms) Oct 27 11:24:38.271: INFO: (0) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 16.732435ms) Oct 27 11:24:38.272: INFO: (0) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test<... 
(200; 4.058764ms) Oct 27 11:24:38.276: INFO: (1) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 4.157747ms) Oct 27 11:24:38.276: INFO: (1) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 4.182357ms) Oct 27 11:24:38.276: INFO: (1) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: ... (200; 4.261312ms) Oct 27 11:24:38.277: INFO: (1) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 4.657535ms) Oct 27 11:24:38.277: INFO: (1) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.584781ms) Oct 27 11:24:38.277: INFO: (1) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.600386ms) Oct 27 11:24:38.278: INFO: (1) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 6.022429ms) Oct 27 11:24:38.278: INFO: (1) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 6.266555ms) Oct 27 11:24:38.278: INFO: (1) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 6.405218ms) Oct 27 11:24:38.278: INFO: (1) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 6.42603ms) Oct 27 11:24:38.283: INFO: (2) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 4.8574ms) Oct 27 11:24:38.283: INFO: (2) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 4.896654ms) Oct 27 11:24:38.283: INFO: (2) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 4.886467ms) Oct 27 11:24:38.283: INFO: (2) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 5.037716ms) Oct 27 11:24:38.283: INFO: (2) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 5.095191ms) Oct 27 11:24:38.284: INFO: (2) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 5.23466ms) Oct 27 11:24:38.284: INFO: (2) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 5.207894ms) Oct 27 11:24:38.284: INFO: (2) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.234376ms) Oct 27 11:24:38.285: INFO: (2) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 6.523711ms) Oct 27 11:24:38.285: INFO: (2) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 6.684138ms) Oct 27 11:24:38.286: INFO: (2) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 7.296321ms) Oct 27 11:24:38.286: INFO: (2) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 7.737914ms) Oct 27 11:24:38.286: INFO: (2) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 7.681702ms) Oct 27 11:24:38.286: INFO: (2) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test<... 
(200; 4.471778ms) Oct 27 11:24:38.291: INFO: (3) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.177661ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 5.444145ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.532056ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 5.632376ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 6.017133ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 5.977451ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 6.051601ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 6.049931ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 6.21866ms) Oct 27 11:24:38.292: INFO: (3) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 6.142424ms) Oct 27 11:24:38.293: INFO: (3) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 6.503424ms) Oct 27 11:24:38.293: INFO: (3) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: ... (200; 2.151445ms) Oct 27 11:24:38.299: INFO: (4) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 6.208783ms) Oct 27 11:24:38.299: INFO: (4) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 6.170331ms) Oct 27 11:24:38.299: INFO: (4) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 6.22119ms) Oct 27 11:24:38.299: INFO: (4) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 6.207028ms) Oct 27 11:24:38.299: INFO: (4) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 6.320781ms) Oct 27 11:24:38.299: INFO: (4) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 6.246283ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 6.643674ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 6.652493ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 7.017024ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 7.009308ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 6.954743ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 6.960727ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 7.001315ms) Oct 27 11:24:38.300: INFO: (4) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test<... 
(200; 2.986821ms) Oct 27 11:24:38.303: INFO: (5) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 3.201102ms) Oct 27 11:24:38.303: INFO: (5) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: ... (200; 4.238386ms) Oct 27 11:24:38.304: INFO: (5) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 4.354019ms) Oct 27 11:24:38.306: INFO: (5) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 5.668577ms) Oct 27 11:24:38.306: INFO: (5) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 5.802133ms) Oct 27 11:24:38.306: INFO: (5) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 5.835297ms) Oct 27 11:24:38.306: INFO: (5) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 5.856343ms) Oct 27 11:24:38.309: INFO: (6) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 2.779095ms) Oct 27 11:24:38.309: INFO: (6) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 2.772242ms) Oct 27 11:24:38.309: INFO: (6) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 3.173992ms) Oct 27 11:24:38.309: INFO: (6) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: ... (200; 3.246577ms) Oct 27 11:24:38.310: INFO: (6) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 4.438994ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 4.424029ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 4.626598ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 4.969477ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 4.897059ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 4.946096ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 4.99652ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 4.932906ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 5.065922ms) Oct 27 11:24:38.311: INFO: (6) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 4.997489ms) Oct 27 11:24:38.321: INFO: (7) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... 
(200; 9.559778ms) Oct 27 11:24:38.323: INFO: (7) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 12.040464ms) Oct 27 11:24:38.323: INFO: (7) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 12.307466ms) Oct 27 11:24:38.324: INFO: (7) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 12.557885ms) Oct 27 11:24:38.324: INFO: (7) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 12.566356ms) Oct 27 11:24:38.324: INFO: (7) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 12.628504ms) Oct 27 11:24:38.324: INFO: (7) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 12.635006ms) Oct 27 11:24:38.324: INFO: (7) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 12.651432ms) Oct 27 11:24:38.324: INFO: (7) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 12.776931ms) Oct 27 11:24:38.325: INFO: (7) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 13.369108ms) Oct 27 11:24:38.325: INFO: (7) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 13.448031ms) Oct 27 11:24:38.325: INFO: (7) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 13.398169ms) Oct 27 11:24:38.325: INFO: (7) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 13.456021ms) Oct 27 11:24:38.325: INFO: (7) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 13.613644ms) Oct 27 11:24:38.325: INFO: (7) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 4.200404ms) Oct 27 11:24:38.329: INFO: (8) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 4.18379ms) Oct 27 11:24:38.330: INFO: (8) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 4.60984ms) Oct 27 11:24:38.330: INFO: (8) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 4.943281ms) Oct 27 11:24:38.330: INFO: (8) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 5.004686ms) Oct 27 11:24:38.330: INFO: (8) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.174358ms) Oct 27 11:24:38.330: INFO: (8) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... 
(200; 5.268276ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 5.693075ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 5.679827ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 5.736324ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 6.058197ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 6.013806ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 6.067782ms) Oct 27 11:24:38.331: INFO: (8) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 3.360922ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.483006ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.760476ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 3.649934ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.673148ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 3.665443ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 3.719339ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.973414ms) Oct 27 11:24:38.335: INFO: (9) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 5.651405ms) Oct 27 11:24:38.342: INFO: (10) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 5.656563ms) Oct 27 11:24:38.342: INFO: (10) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... 
(200; 5.731399ms) Oct 27 11:24:38.342: INFO: (10) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 5.675038ms) Oct 27 11:24:38.343: INFO: (10) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.807553ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 3.979809ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 4.07993ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 4.011077ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 4.110398ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 4.091698ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 4.151434ms) Oct 27 11:24:38.347: INFO: (11) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 4.260821ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 5.121984ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.175999ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 5.234527ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 5.18683ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.222296ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.288157ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 5.473604ms) Oct 27 11:24:38.348: INFO: (11) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.608613ms) Oct 27 11:24:38.353: INFO: (12) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 4.677526ms) Oct 27 11:24:38.353: INFO: (12) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 4.638895ms) Oct 27 11:24:38.353: INFO: (12) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 4.651101ms) Oct 27 11:24:38.353: INFO: (12) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 4.549769ms) Oct 27 11:24:38.353: INFO: (12) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... 
(200; 4.784302ms) Oct 27 11:24:38.354: INFO: (12) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 5.54301ms) Oct 27 11:24:38.354: INFO: (12) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.573153ms) Oct 27 11:24:38.356: INFO: (12) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 7.554327ms) Oct 27 11:24:38.356: INFO: (12) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 7.662587ms) Oct 27 11:24:38.356: INFO: (12) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 7.631386ms) Oct 27 11:24:38.356: INFO: (12) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 7.741764ms) Oct 27 11:24:38.356: INFO: (12) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 7.861744ms) Oct 27 11:24:38.358: INFO: (12) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 9.57448ms) Oct 27 11:24:38.358: INFO: (12) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 9.517631ms) Oct 27 11:24:38.358: INFO: (12) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 9.57079ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.710111ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.800251ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 3.735671ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.79061ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 3.826681ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 4.071676ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 4.298499ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 4.320187ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 4.29949ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 4.303213ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 4.410288ms) Oct 27 11:24:38.362: INFO: (13) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 4.48241ms) Oct 27 11:24:38.363: INFO: (13) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 4.696593ms) Oct 27 11:24:38.363: INFO: (13) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 4.705086ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... 
(200; 3.122927ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 3.139116ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.066806ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 3.419745ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.495308ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 3.579785ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.59115ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.711724ms) Oct 27 11:24:38.366: INFO: (14) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: ... (200; 3.714919ms) Oct 27 11:24:38.367: INFO: (14) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 3.869681ms) Oct 27 11:24:38.367: INFO: (14) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 4.156715ms) Oct 27 11:24:38.367: INFO: (14) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 4.162869ms) Oct 27 11:24:38.367: INFO: (14) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 4.187125ms) Oct 27 11:24:38.367: INFO: (14) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 4.167938ms) Oct 27 11:24:38.367: INFO: (14) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 4.198377ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 3.397955ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 3.493398ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 3.477867ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 3.417035ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 3.459075ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.459092ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.586148ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 3.55607ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 3.546272ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test<... 
(200; 3.837398ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 3.815915ms) Oct 27 11:24:38.371: INFO: (15) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 3.911706ms) Oct 27 11:24:38.413: INFO: (15) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 46.160941ms) Oct 27 11:24:38.413: INFO: (15) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 46.121863ms) Oct 27 11:24:38.417: INFO: (16) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 4.075698ms) Oct 27 11:24:38.417: INFO: (16) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 3.949749ms) Oct 27 11:24:38.426: INFO: (16) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 12.869401ms) Oct 27 11:24:38.427: INFO: (16) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 13.836725ms) Oct 27 11:24:38.427: INFO: (16) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 14.657593ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 14.645981ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 14.695553ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 14.609168ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 14.695401ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 14.707438ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 14.790682ms) Oct 27 11:24:38.428: INFO: (16) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 15.027657ms) Oct 27 11:24:38.433: INFO: (17) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 4.65078ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 4.91565ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 5.062599ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 5.273521ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 5.228911ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 5.501263ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 5.473784ms) Oct 27 11:24:38.434: INFO: (17) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 6.245597ms) Oct 27 11:24:38.435: INFO: (17) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 6.267682ms) Oct 27 11:24:38.435: INFO: (17) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... 
(200; 6.196883ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:1080/proxy/: test<... (200; 4.889692ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 4.811624ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.052332ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 5.205416ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 5.134714ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.197331ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.117298ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 5.181429ms) Oct 27 11:24:38.440: INFO: (18) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test (200; 5.186703ms) Oct 27 11:24:38.442: INFO: (18) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname1/proxy/: foo (200; 6.931189ms) Oct 27 11:24:38.442: INFO: (18) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 6.94226ms) Oct 27 11:24:38.442: INFO: (18) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname2/proxy/: bar (200; 6.963205ms) Oct 27 11:24:38.442: INFO: (18) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname1/proxy/: tls baz (200; 6.994519ms) Oct 27 11:24:38.442: INFO: (18) /api/v1/namespaces/proxy-1542/services/http:proxy-service-ckqdl:portname2/proxy/: bar (200; 7.026706ms) Oct 27 11:24:38.442: INFO: (18) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 7.036447ms) Oct 27 11:24:38.444: INFO: (19) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:460/proxy/: tls baz (200; 2.101006ms) Oct 27 11:24:38.446: INFO: (19) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v/proxy/: test (200; 3.35917ms) Oct 27 11:24:38.446: INFO: (19) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 3.953836ms) Oct 27 11:24:38.446: INFO: (19) /api/v1/namespaces/proxy-1542/services/https:proxy-service-ckqdl:tlsportname2/proxy/: tls qux (200; 4.268134ms) Oct 27 11:24:38.447: INFO: (19) /api/v1/namespaces/proxy-1542/pods/http:proxy-service-ckqdl-dmk6v:1080/proxy/: ... (200; 4.504541ms) Oct 27 11:24:38.447: INFO: (19) /api/v1/namespaces/proxy-1542/services/proxy-service-ckqdl:portname1/proxy/: foo (200; 4.638451ms) Oct 27 11:24:38.447: INFO: (19) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:443/proxy/: test<... 
(200; 5.297768ms) Oct 27 11:24:38.447: INFO: (19) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:160/proxy/: foo (200; 5.246866ms) Oct 27 11:24:38.447: INFO: (19) /api/v1/namespaces/proxy-1542/pods/https:proxy-service-ckqdl-dmk6v:462/proxy/: tls qux (200; 5.268994ms) Oct 27 11:24:38.447: INFO: (19) /api/v1/namespaces/proxy-1542/pods/proxy-service-ckqdl-dmk6v:162/proxy/: bar (200; 5.352759ms) STEP: deleting ReplicationController proxy-service-ckqdl in namespace proxy-1542, will wait for the garbage collector to delete the pods Oct 27 11:24:38.507: INFO: Deleting ReplicationController proxy-service-ckqdl took: 7.152196ms Oct 27 11:24:38.607: INFO: Terminating ReplicationController proxy-service-ckqdl pods took: 100.203457ms [AfterEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:24:41.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1542" for this suite. • [SLOW TEST:15.192 seconds] [sig-network] Proxy /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":180,"skipped":2813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:24:41.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:24:41.307: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 27 11:24:41.324: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 27 11:24:46.339: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 27 11:24:46.339: INFO: Creating deployment "test-rolling-update-deployment" Oct 27 11:24:46.343: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 27 11:24:46.419: INFO: deployment "test-rolling-update-deployment" doesn't 
have the required revision set Oct 27 11:24:48.428: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 27 11:24:48.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394686, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394686, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394686, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394686, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:24:50.435: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 27 11:24:50.444: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3152 /apis/apps/v1/namespaces/deployment-3152/deployments/test-rolling-update-deployment 97f95313-9abc-4bdd-bc11-7c854f920833 8976175 1 2020-10-27 11:24:46 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-10-27 11:24:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-27 11:24:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004178878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-27 11:24:46 +0000 UTC,LastTransitionTime:2020-10-27 11:24:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-10-27 11:24:49 +0000 UTC,LastTransitionTime:2020-10-27 11:24:46 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 27 11:24:50.446: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-3152 /apis/apps/v1/namespaces/deployment-3152/replicasets/test-rolling-update-deployment-c4cb8d6d9 be5d3f09-7c03-4aad-9e87-08ec6c4577cf 8976164 1 2020-10-27 11:24:46 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 97f95313-9abc-4bdd-bc11-7c854f920833 0xc0042984d0 0xc0042984d1}] [] [{kube-controller-manager Update apps/v1 2020-10-27 11:24:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97f95313-9abc-4bdd-bc11-7c854f920833\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004298548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:24:50.446: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 27 11:24:50.446: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3152 /apis/apps/v1/namespaces/deployment-3152/replicasets/test-rolling-update-controller 2f20002d-8907-46ca-98e0-8e38c1c1028b 8976174 2 2020-10-27 11:24:41 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 97f95313-9abc-4bdd-bc11-7c854f920833 0xc0042983c7 0xc0042983c8}] [] [{e2e.test Update apps/v1 2020-10-27 11:24:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-27 11:24:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"97f95313-9abc-4bdd-bc11-7c854f920833\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004298468 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:24:50.448: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-tp6tg" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-tp6tg test-rolling-update-deployment-c4cb8d6d9- deployment-3152 /api/v1/namespaces/deployment-3152/pods/test-rolling-update-deployment-c4cb8d6d9-tp6tg 8f67dec1-6e3f-405c-ae55-7277d370fcfb 8976163 0 2020-10-27 11:24:46 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 be5d3f09-7c03-4aad-9e87-08ec6c4577cf 0xc0042989e0 0xc0042989e1}] [] [{kube-controller-manager Update v1 2020-10-27 11:24:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be5d3f09-7c03-4aad-9e87-08ec6c4577cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 11:24:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rrsq8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rrsq8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rrsq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:24:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:24:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:24:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:24:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.94,StartTime:2020-10-27 11:24:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 11:24:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://fca186232ca388f0612d61bcbe293b893350a047952368b8b4dcd464e32edaa7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:24:50.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3152" for this suite. 
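Aside: a minimal client-go sketch of the rolling-update Deployment this spec exercises. The container image, "name: sample-pod" labels, single replica, and 25% maxSurge/maxUnavailable values come from the object dump logged above; the kubeconfig path, error handling, and namespace literal are illustrative assumptions, not the e2e framework's own helper code.

// rollingupdate_deployment_sketch.go — illustrative only; not part of the recorded run.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path mirrors the one the suite logs (>>> kubeConfig: /root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	maxSurge := intstr.FromString("25%")
	maxUnavailable := intstr.FromString("25%")

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-rolling-update-deployment",
			Labels: map[string]string{"name": "sample-pod"},
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				// RollingUpdate with 25%/25% matches the strategy shown in the dump above.
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
					}},
				},
			},
		},
	}

	// "deployment-3152" is the generated namespace used in the log above; reuse is illustrative.
	created, err := cs.AppsV1().Deployments("deployment-3152").Create(context.TODO(), d, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment:", created.Name)
}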
• [SLOW TEST:9.212 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":181,"skipped":2836,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:24:50.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:25:03.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-693" for this suite. • [SLOW TEST:13.151 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":182,"skipped":2841,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:25:03.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:25:19.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8783" for this suite. • [SLOW TEST:16.279 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":183,"skipped":2847,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:25:19.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-6924fb9c-7e21-429f-8b4a-ce477cf8a81c STEP: Creating configMap with name cm-test-opt-upd-03fd57cc-b45b-4868-8f9a-a7c5c4f3974b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6924fb9c-7e21-429f-8b4a-ce477cf8a81c STEP: Updating configmap cm-test-opt-upd-03fd57cc-b45b-4868-8f9a-a7c5c4f3974b STEP: Creating configMap with name cm-test-opt-create-8a6a1f91-daf7-4a07-acc0-818ff5f37016 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:25:30.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8560" for this suite. 
• [SLOW TEST:10.320 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":2851,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:25:30.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 27 11:25:30.270: INFO: Waiting up to 5m0s for pod "pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8" in namespace "emptydir-280" to be "Succeeded or Failed" Oct 27 11:25:30.294: INFO: Pod "pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.174029ms Oct 27 11:25:32.299: INFO: Pod "pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029048353s Oct 27 11:25:34.304: INFO: Pod "pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033644236s STEP: Saw pod success Oct 27 11:25:34.304: INFO: Pod "pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8" satisfied condition "Succeeded or Failed" Oct 27 11:25:34.307: INFO: Trying to get logs from node kali-worker pod pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8 container test-container: STEP: delete the pod Oct 27 11:25:34.397: INFO: Waiting for pod pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8 to disappear Oct 27 11:25:34.405: INFO: Pod pod-008f7ff5-0d48-4fe3-9755-f00fca124fa8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:25:34.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-280" for this suite. 
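Aside: a minimal client-go sketch of a tmpfs-backed emptyDir pod like the one the "(root,0777,tmpfs)" spec above creates and waits on until "Succeeded or Failed". The volume medium and container name follow the log; the image, shell command, and namespace are illustrative assumptions (the real spec uses the framework's mounttest container to create a file with mode 0777 and verify it).

// emptydir_tmpfs_sketch.go — illustrative only; not part of the recorded run.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" requests a tmpfs-backed emptyDir, as in the spec title.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				// A plain shell that prints the mount type and directory mode is enough
				// for illustration; the conformance test asserts on the created file's mode.
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "mount | grep test-volume && ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	// Namespace literal is illustrative; the suite created a fresh "emptydir-280" namespace.
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}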
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":185,"skipped":2864,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:25:34.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:25:35.127: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:25:37.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:25:39.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739394735, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:25:42.312: INFO: Waiting 
for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:25:52.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5515" for this suite. STEP: Destroying namespace "webhook-5515-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":186,"skipped":2885,"failed":0} [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:25:52.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1027 11:25:53.695287 7 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Oct 27 11:26:55.794: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:26:55.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9681" for this suite. • [SLOW TEST:63.258 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":187,"skipped":2885,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:26:55.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5588 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5588 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5588 Oct 27 11:26:55.936: INFO: Found 0 stateful pods, waiting for 1 Oct 27 11:27:05.941: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 27 11:27:05.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:27:06.198: INFO: stderr: "I1027 11:27:06.068995 1908 log.go:181] (0xc000635080) (0xc00037c500) Create stream\nI1027 11:27:06.069046 1908 log.go:181] (0xc000635080) 
(0xc00037c500) Stream added, broadcasting: 1\nI1027 11:27:06.073371 1908 log.go:181] (0xc000635080) Reply frame received for 1\nI1027 11:27:06.073409 1908 log.go:181] (0xc000635080) (0xc00037cdc0) Create stream\nI1027 11:27:06.073423 1908 log.go:181] (0xc000635080) (0xc00037cdc0) Stream added, broadcasting: 3\nI1027 11:27:06.074368 1908 log.go:181] (0xc000635080) Reply frame received for 3\nI1027 11:27:06.074404 1908 log.go:181] (0xc000635080) (0xc00012df40) Create stream\nI1027 11:27:06.074419 1908 log.go:181] (0xc000635080) (0xc00012df40) Stream added, broadcasting: 5\nI1027 11:27:06.075381 1908 log.go:181] (0xc000635080) Reply frame received for 5\nI1027 11:27:06.159920 1908 log.go:181] (0xc000635080) Data frame received for 5\nI1027 11:27:06.159954 1908 log.go:181] (0xc00012df40) (5) Data frame handling\nI1027 11:27:06.159970 1908 log.go:181] (0xc00012df40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:27:06.189448 1908 log.go:181] (0xc000635080) Data frame received for 5\nI1027 11:27:06.189490 1908 log.go:181] (0xc000635080) Data frame received for 3\nI1027 11:27:06.189522 1908 log.go:181] (0xc00037cdc0) (3) Data frame handling\nI1027 11:27:06.189540 1908 log.go:181] (0xc00037cdc0) (3) Data frame sent\nI1027 11:27:06.189573 1908 log.go:181] (0xc00012df40) (5) Data frame handling\nI1027 11:27:06.189868 1908 log.go:181] (0xc000635080) Data frame received for 3\nI1027 11:27:06.189919 1908 log.go:181] (0xc00037cdc0) (3) Data frame handling\nI1027 11:27:06.191996 1908 log.go:181] (0xc000635080) Data frame received for 1\nI1027 11:27:06.192034 1908 log.go:181] (0xc00037c500) (1) Data frame handling\nI1027 11:27:06.192056 1908 log.go:181] (0xc00037c500) (1) Data frame sent\nI1027 11:27:06.192089 1908 log.go:181] (0xc000635080) (0xc00037c500) Stream removed, broadcasting: 1\nI1027 11:27:06.192114 1908 log.go:181] (0xc000635080) Go away received\nI1027 11:27:06.192683 1908 log.go:181] (0xc000635080) (0xc00037c500) Stream removed, broadcasting: 1\nI1027 11:27:06.192709 1908 log.go:181] (0xc000635080) (0xc00037cdc0) Stream removed, broadcasting: 3\nI1027 11:27:06.192720 1908 log.go:181] (0xc000635080) (0xc00012df40) Stream removed, broadcasting: 5\n" Oct 27 11:27:06.199: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:27:06.199: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:27:06.203: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 27 11:27:16.208: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:27:16.208: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:27:16.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999916s Oct 27 11:27:17.242: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980944155s Oct 27 11:27:18.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.976713781s Oct 27 11:27:19.251: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971997816s Oct 27 11:27:20.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.967127651s Oct 27 11:27:21.260: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.963533539s Oct 27 11:27:22.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.958978688s Oct 27 11:27:23.269: INFO: Verifying statefulset ss doesn't scale past 1 for another 
2.954459242s Oct 27 11:27:24.275: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.949197236s Oct 27 11:27:25.279: INFO: Verifying statefulset ss doesn't scale past 1 for another 943.585764ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5588 Oct 27 11:27:26.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:27:26.528: INFO: stderr: "I1027 11:27:26.418042 1925 log.go:181] (0xc000751080) (0xc0002f7220) Create stream\nI1027 11:27:26.418099 1925 log.go:181] (0xc000751080) (0xc0002f7220) Stream added, broadcasting: 1\nI1027 11:27:26.425530 1925 log.go:181] (0xc000751080) Reply frame received for 1\nI1027 11:27:26.425570 1925 log.go:181] (0xc000751080) (0xc000b50000) Create stream\nI1027 11:27:26.425580 1925 log.go:181] (0xc000751080) (0xc000b50000) Stream added, broadcasting: 3\nI1027 11:27:26.426454 1925 log.go:181] (0xc000751080) Reply frame received for 3\nI1027 11:27:26.426495 1925 log.go:181] (0xc000751080) (0xc0002f6140) Create stream\nI1027 11:27:26.426510 1925 log.go:181] (0xc000751080) (0xc0002f6140) Stream added, broadcasting: 5\nI1027 11:27:26.427545 1925 log.go:181] (0xc000751080) Reply frame received for 5\nI1027 11:27:26.521619 1925 log.go:181] (0xc000751080) Data frame received for 3\nI1027 11:27:26.521656 1925 log.go:181] (0xc000b50000) (3) Data frame handling\nI1027 11:27:26.521673 1925 log.go:181] (0xc000b50000) (3) Data frame sent\nI1027 11:27:26.521682 1925 log.go:181] (0xc000751080) Data frame received for 3\nI1027 11:27:26.521691 1925 log.go:181] (0xc000b50000) (3) Data frame handling\nI1027 11:27:26.521747 1925 log.go:181] (0xc000751080) Data frame received for 5\nI1027 11:27:26.521760 1925 log.go:181] (0xc0002f6140) (5) Data frame handling\nI1027 11:27:26.521770 1925 log.go:181] (0xc0002f6140) (5) Data frame sent\nI1027 11:27:26.521779 1925 log.go:181] (0xc000751080) Data frame received for 5\nI1027 11:27:26.521785 1925 log.go:181] (0xc0002f6140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 11:27:26.523424 1925 log.go:181] (0xc000751080) Data frame received for 1\nI1027 11:27:26.523604 1925 log.go:181] (0xc0002f7220) (1) Data frame handling\nI1027 11:27:26.523643 1925 log.go:181] (0xc0002f7220) (1) Data frame sent\nI1027 11:27:26.523663 1925 log.go:181] (0xc000751080) (0xc0002f7220) Stream removed, broadcasting: 1\nI1027 11:27:26.523701 1925 log.go:181] (0xc000751080) Go away received\nI1027 11:27:26.524106 1925 log.go:181] (0xc000751080) (0xc0002f7220) Stream removed, broadcasting: 1\nI1027 11:27:26.524120 1925 log.go:181] (0xc000751080) (0xc000b50000) Stream removed, broadcasting: 3\nI1027 11:27:26.524133 1925 log.go:181] (0xc000751080) (0xc0002f6140) Stream removed, broadcasting: 5\n" Oct 27 11:27:26.528: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:27:26.528: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:27:26.533: INFO: Found 1 stateful pods, waiting for 3 Oct 27 11:27:36.536: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 11:27:36.536: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 11:27:36.536: INFO: Waiting for pod 
ss-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 27 11:27:46.590: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 11:27:46.590: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 11:27:46.590: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 27 11:27:46.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:27:47.038: INFO: stderr: "I1027 11:27:46.949955 1943 log.go:181] (0xc000e8d130) (0xc000b968c0) Create stream\nI1027 11:27:46.949995 1943 log.go:181] (0xc000e8d130) (0xc000b968c0) Stream added, broadcasting: 1\nI1027 11:27:46.952895 1943 log.go:181] (0xc000e8d130) Reply frame received for 1\nI1027 11:27:46.952926 1943 log.go:181] (0xc000e8d130) (0xc000afe0a0) Create stream\nI1027 11:27:46.952934 1943 log.go:181] (0xc000e8d130) (0xc000afe0a0) Stream added, broadcasting: 3\nI1027 11:27:46.953500 1943 log.go:181] (0xc000e8d130) Reply frame received for 3\nI1027 11:27:46.953517 1943 log.go:181] (0xc000e8d130) (0xc000b96000) Create stream\nI1027 11:27:46.953522 1943 log.go:181] (0xc000e8d130) (0xc000b96000) Stream added, broadcasting: 5\nI1027 11:27:46.953977 1943 log.go:181] (0xc000e8d130) Reply frame received for 5\nI1027 11:27:47.030654 1943 log.go:181] (0xc000e8d130) Data frame received for 5\nI1027 11:27:47.030681 1943 log.go:181] (0xc000b96000) (5) Data frame handling\nI1027 11:27:47.030694 1943 log.go:181] (0xc000b96000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:27:47.032915 1943 log.go:181] (0xc000e8d130) Data frame received for 3\nI1027 11:27:47.032938 1943 log.go:181] (0xc000afe0a0) (3) Data frame handling\nI1027 11:27:47.032950 1943 log.go:181] (0xc000afe0a0) (3) Data frame sent\nI1027 11:27:47.033098 1943 log.go:181] (0xc000e8d130) Data frame received for 3\nI1027 11:27:47.033112 1943 log.go:181] (0xc000afe0a0) (3) Data frame handling\nI1027 11:27:47.033271 1943 log.go:181] (0xc000e8d130) Data frame received for 5\nI1027 11:27:47.033280 1943 log.go:181] (0xc000b96000) (5) Data frame handling\nI1027 11:27:47.034599 1943 log.go:181] (0xc000e8d130) Data frame received for 1\nI1027 11:27:47.034612 1943 log.go:181] (0xc000b968c0) (1) Data frame handling\nI1027 11:27:47.034627 1943 log.go:181] (0xc000b968c0) (1) Data frame sent\nI1027 11:27:47.034640 1943 log.go:181] (0xc000e8d130) (0xc000b968c0) Stream removed, broadcasting: 1\nI1027 11:27:47.034659 1943 log.go:181] (0xc000e8d130) Go away received\nI1027 11:27:47.034877 1943 log.go:181] (0xc000e8d130) (0xc000b968c0) Stream removed, broadcasting: 1\nI1027 11:27:47.034892 1943 log.go:181] (0xc000e8d130) (0xc000afe0a0) Stream removed, broadcasting: 3\nI1027 11:27:47.034901 1943 log.go:181] (0xc000e8d130) (0xc000b96000) Stream removed, broadcasting: 5\n" Oct 27 11:27:47.038: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:27:47.038: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:27:47.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5588 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:27:47.269: INFO: stderr: "I1027 11:27:47.159888 1961 log.go:181] (0xc00003a0b0) (0xc000bc2140) Create stream\nI1027 11:27:47.159940 1961 log.go:181] (0xc00003a0b0) (0xc000bc2140) Stream added, broadcasting: 1\nI1027 11:27:47.161440 1961 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1027 11:27:47.161472 1961 log.go:181] (0xc00003a0b0) (0xc0005ec460) Create stream\nI1027 11:27:47.161480 1961 log.go:181] (0xc00003a0b0) (0xc0005ec460) Stream added, broadcasting: 3\nI1027 11:27:47.162131 1961 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1027 11:27:47.162159 1961 log.go:181] (0xc00003a0b0) (0xc00043e000) Create stream\nI1027 11:27:47.162171 1961 log.go:181] (0xc00003a0b0) (0xc00043e000) Stream added, broadcasting: 5\nI1027 11:27:47.162959 1961 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1027 11:27:47.223390 1961 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 11:27:47.223415 1961 log.go:181] (0xc00043e000) (5) Data frame handling\nI1027 11:27:47.223439 1961 log.go:181] (0xc00043e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:27:47.263227 1961 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1027 11:27:47.263245 1961 log.go:181] (0xc0005ec460) (3) Data frame handling\nI1027 11:27:47.263252 1961 log.go:181] (0xc0005ec460) (3) Data frame sent\nI1027 11:27:47.263257 1961 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1027 11:27:47.263263 1961 log.go:181] (0xc0005ec460) (3) Data frame handling\nI1027 11:27:47.263312 1961 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 11:27:47.263328 1961 log.go:181] (0xc00043e000) (5) Data frame handling\nI1027 11:27:47.264758 1961 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1027 11:27:47.264786 1961 log.go:181] (0xc000bc2140) (1) Data frame handling\nI1027 11:27:47.264800 1961 log.go:181] (0xc000bc2140) (1) Data frame sent\nI1027 11:27:47.264814 1961 log.go:181] (0xc00003a0b0) (0xc000bc2140) Stream removed, broadcasting: 1\nI1027 11:27:47.264830 1961 log.go:181] (0xc00003a0b0) Go away received\nI1027 11:27:47.265124 1961 log.go:181] (0xc00003a0b0) (0xc000bc2140) Stream removed, broadcasting: 1\nI1027 11:27:47.265136 1961 log.go:181] (0xc00003a0b0) (0xc0005ec460) Stream removed, broadcasting: 3\nI1027 11:27:47.265144 1961 log.go:181] (0xc00003a0b0) (0xc00043e000) Stream removed, broadcasting: 5\n" Oct 27 11:27:47.269: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:27:47.269: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:27:47.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:27:47.508: INFO: stderr: "I1027 11:27:47.392636 1978 log.go:181] (0xc0006316b0) (0xc0006288c0) Create stream\nI1027 11:27:47.392679 1978 log.go:181] (0xc0006316b0) (0xc0006288c0) Stream added, broadcasting: 1\nI1027 11:27:47.395962 1978 log.go:181] (0xc0006316b0) Reply frame received for 1\nI1027 11:27:47.396000 1978 log.go:181] (0xc0006316b0) (0xc0007d41e0) Create stream\nI1027 11:27:47.396016 1978 log.go:181] (0xc0006316b0) (0xc0007d41e0) Stream added, broadcasting: 3\nI1027 11:27:47.396683 1978 log.go:181] (0xc0006316b0) Reply 
frame received for 3\nI1027 11:27:47.396709 1978 log.go:181] (0xc0006316b0) (0xc0007d4280) Create stream\nI1027 11:27:47.396719 1978 log.go:181] (0xc0006316b0) (0xc0007d4280) Stream added, broadcasting: 5\nI1027 11:27:47.397311 1978 log.go:181] (0xc0006316b0) Reply frame received for 5\nI1027 11:27:47.461931 1978 log.go:181] (0xc0006316b0) Data frame received for 5\nI1027 11:27:47.461957 1978 log.go:181] (0xc0007d4280) (5) Data frame handling\nI1027 11:27:47.461975 1978 log.go:181] (0xc0007d4280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:27:47.501945 1978 log.go:181] (0xc0006316b0) Data frame received for 3\nI1027 11:27:47.501980 1978 log.go:181] (0xc0007d41e0) (3) Data frame handling\nI1027 11:27:47.502007 1978 log.go:181] (0xc0007d41e0) (3) Data frame sent\nI1027 11:27:47.502022 1978 log.go:181] (0xc0006316b0) Data frame received for 3\nI1027 11:27:47.502036 1978 log.go:181] (0xc0007d41e0) (3) Data frame handling\nI1027 11:27:47.502229 1978 log.go:181] (0xc0006316b0) Data frame received for 5\nI1027 11:27:47.502257 1978 log.go:181] (0xc0007d4280) (5) Data frame handling\nI1027 11:27:47.503289 1978 log.go:181] (0xc0006316b0) Data frame received for 1\nI1027 11:27:47.503304 1978 log.go:181] (0xc0006288c0) (1) Data frame handling\nI1027 11:27:47.503311 1978 log.go:181] (0xc0006288c0) (1) Data frame sent\nI1027 11:27:47.503319 1978 log.go:181] (0xc0006316b0) (0xc0006288c0) Stream removed, broadcasting: 1\nI1027 11:27:47.503328 1978 log.go:181] (0xc0006316b0) Go away received\nI1027 11:27:47.503857 1978 log.go:181] (0xc0006316b0) (0xc0006288c0) Stream removed, broadcasting: 1\nI1027 11:27:47.503882 1978 log.go:181] (0xc0006316b0) (0xc0007d41e0) Stream removed, broadcasting: 3\nI1027 11:27:47.503893 1978 log.go:181] (0xc0006316b0) (0xc0007d4280) Stream removed, broadcasting: 5\n" Oct 27 11:27:47.508: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:27:47.508: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:27:47.508: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:27:47.511: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 27 11:27:57.519: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:27:57.519: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:27:57.519: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:27:57.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999962s Oct 27 11:27:58.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.97991385s Oct 27 11:27:59.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974048453s Oct 27 11:28:00.562: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.968539362s Oct 27 11:28:01.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.964133135s Oct 27 11:28:02.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.959076185s Oct 27 11:28:03.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953636779s Oct 27 11:28:04.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.946763982s Oct 27 11:28:05.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941503191s Oct 27 11:28:06.593: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 937.743406ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5588 Oct 27 11:28:07.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:28:07.836: INFO: stderr: "I1027 11:28:07.745337 1996 log.go:181] (0xc00092f550) (0xc0007ba8c0) Create stream\nI1027 11:28:07.745378 1996 log.go:181] (0xc00092f550) (0xc0007ba8c0) Stream added, broadcasting: 1\nI1027 11:28:07.748972 1996 log.go:181] (0xc00092f550) Reply frame received for 1\nI1027 11:28:07.749005 1996 log.go:181] (0xc00092f550) (0xc000c44000) Create stream\nI1027 11:28:07.749018 1996 log.go:181] (0xc00092f550) (0xc000c44000) Stream added, broadcasting: 3\nI1027 11:28:07.749615 1996 log.go:181] (0xc00092f550) Reply frame received for 3\nI1027 11:28:07.749644 1996 log.go:181] (0xc00092f550) (0xc00022a0a0) Create stream\nI1027 11:28:07.749674 1996 log.go:181] (0xc00092f550) (0xc00022a0a0) Stream added, broadcasting: 5\nI1027 11:28:07.750418 1996 log.go:181] (0xc00092f550) Reply frame received for 5\nI1027 11:28:07.830045 1996 log.go:181] (0xc00092f550) Data frame received for 5\nI1027 11:28:07.830066 1996 log.go:181] (0xc00022a0a0) (5) Data frame handling\nI1027 11:28:07.830076 1996 log.go:181] (0xc00022a0a0) (5) Data frame sent\nI1027 11:28:07.830083 1996 log.go:181] (0xc00092f550) Data frame received for 5\nI1027 11:28:07.830089 1996 log.go:181] (0xc00022a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 11:28:07.830138 1996 log.go:181] (0xc00092f550) Data frame received for 3\nI1027 11:28:07.830151 1996 log.go:181] (0xc000c44000) (3) Data frame handling\nI1027 11:28:07.830168 1996 log.go:181] (0xc000c44000) (3) Data frame sent\nI1027 11:28:07.830186 1996 log.go:181] (0xc00092f550) Data frame received for 3\nI1027 11:28:07.830192 1996 log.go:181] (0xc000c44000) (3) Data frame handling\nI1027 11:28:07.831497 1996 log.go:181] (0xc00092f550) Data frame received for 1\nI1027 11:28:07.831512 1996 log.go:181] (0xc0007ba8c0) (1) Data frame handling\nI1027 11:28:07.831520 1996 log.go:181] (0xc0007ba8c0) (1) Data frame sent\nI1027 11:28:07.831535 1996 log.go:181] (0xc00092f550) (0xc0007ba8c0) Stream removed, broadcasting: 1\nI1027 11:28:07.831791 1996 log.go:181] (0xc00092f550) (0xc0007ba8c0) Stream removed, broadcasting: 1\nI1027 11:28:07.831805 1996 log.go:181] (0xc00092f550) (0xc000c44000) Stream removed, broadcasting: 3\nI1027 11:28:07.831827 1996 log.go:181] (0xc00092f550) Go away received\nI1027 11:28:07.831871 1996 log.go:181] (0xc00092f550) (0xc00022a0a0) Stream removed, broadcasting: 5\n" Oct 27 11:28:07.836: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:28:07.836: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:28:07.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:28:08.048: INFO: stderr: "I1027 11:28:07.982033 2013 log.go:181] (0xc000f1d290) (0xc000f2a820) Create stream\nI1027 11:28:07.982084 2013 log.go:181] (0xc000f1d290) (0xc000f2a820) Stream added, broadcasting: 1\nI1027 11:28:07.986059 2013 
log.go:181] (0xc000f1d290) Reply frame received for 1\nI1027 11:28:07.986117 2013 log.go:181] (0xc000f1d290) (0xc0004cc6e0) Create stream\nI1027 11:28:07.986140 2013 log.go:181] (0xc000f1d290) (0xc0004cc6e0) Stream added, broadcasting: 3\nI1027 11:28:07.986797 2013 log.go:181] (0xc000f1d290) Reply frame received for 3\nI1027 11:28:07.986814 2013 log.go:181] (0xc000f1d290) (0xc0004cc780) Create stream\nI1027 11:28:07.986821 2013 log.go:181] (0xc000f1d290) (0xc0004cc780) Stream added, broadcasting: 5\nI1027 11:28:07.987547 2013 log.go:181] (0xc000f1d290) Reply frame received for 5\nI1027 11:28:08.039690 2013 log.go:181] (0xc000f1d290) Data frame received for 3\nI1027 11:28:08.039742 2013 log.go:181] (0xc0004cc6e0) (3) Data frame handling\nI1027 11:28:08.039776 2013 log.go:181] (0xc000f1d290) Data frame received for 5\nI1027 11:28:08.039816 2013 log.go:181] (0xc0004cc780) (5) Data frame handling\nI1027 11:28:08.039836 2013 log.go:181] (0xc0004cc780) (5) Data frame sent\nI1027 11:28:08.039854 2013 log.go:181] (0xc000f1d290) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 11:28:08.039870 2013 log.go:181] (0xc0004cc780) (5) Data frame handling\nI1027 11:28:08.039891 2013 log.go:181] (0xc0004cc6e0) (3) Data frame sent\nI1027 11:28:08.039915 2013 log.go:181] (0xc000f1d290) Data frame received for 3\nI1027 11:28:08.039927 2013 log.go:181] (0xc0004cc6e0) (3) Data frame handling\nI1027 11:28:08.040953 2013 log.go:181] (0xc000f1d290) Data frame received for 1\nI1027 11:28:08.040967 2013 log.go:181] (0xc000f2a820) (1) Data frame handling\nI1027 11:28:08.040981 2013 log.go:181] (0xc000f2a820) (1) Data frame sent\nI1027 11:28:08.040990 2013 log.go:181] (0xc000f1d290) (0xc000f2a820) Stream removed, broadcasting: 1\nI1027 11:28:08.041255 2013 log.go:181] (0xc000f1d290) (0xc000f2a820) Stream removed, broadcasting: 1\nI1027 11:28:08.041295 2013 log.go:181] (0xc000f1d290) Go away received\nI1027 11:28:08.041332 2013 log.go:181] (0xc000f1d290) (0xc0004cc6e0) Stream removed, broadcasting: 3\nI1027 11:28:08.041352 2013 log.go:181] (0xc000f1d290) (0xc0004cc780) Stream removed, broadcasting: 5\n" Oct 27 11:28:08.049: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:28:08.049: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:28:08.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5588 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:28:08.226: INFO: stderr: "I1027 11:28:08.169197 2031 log.go:181] (0xc00003a420) (0xc000a6f9a0) Create stream\nI1027 11:28:08.169258 2031 log.go:181] (0xc00003a420) (0xc000a6f9a0) Stream added, broadcasting: 1\nI1027 11:28:08.170460 2031 log.go:181] (0xc00003a420) Reply frame received for 1\nI1027 11:28:08.170488 2031 log.go:181] (0xc00003a420) (0xc000a6fa40) Create stream\nI1027 11:28:08.170497 2031 log.go:181] (0xc00003a420) (0xc000a6fa40) Stream added, broadcasting: 3\nI1027 11:28:08.171056 2031 log.go:181] (0xc00003a420) Reply frame received for 3\nI1027 11:28:08.171086 2031 log.go:181] (0xc00003a420) (0xc00076a0a0) Create stream\nI1027 11:28:08.171098 2031 log.go:181] (0xc00003a420) (0xc00076a0a0) Stream added, broadcasting: 5\nI1027 11:28:08.171635 2031 log.go:181] (0xc00003a420) Reply frame received for 5\nI1027 11:28:08.219751 2031 log.go:181] (0xc00003a420) Data frame 
received for 3\nI1027 11:28:08.219774 2031 log.go:181] (0xc000a6fa40) (3) Data frame handling\nI1027 11:28:08.219780 2031 log.go:181] (0xc000a6fa40) (3) Data frame sent\nI1027 11:28:08.219784 2031 log.go:181] (0xc00003a420) Data frame received for 3\nI1027 11:28:08.219788 2031 log.go:181] (0xc000a6fa40) (3) Data frame handling\nI1027 11:28:08.219809 2031 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:28:08.219816 2031 log.go:181] (0xc00076a0a0) (5) Data frame handling\nI1027 11:28:08.219824 2031 log.go:181] (0xc00076a0a0) (5) Data frame sent\nI1027 11:28:08.219830 2031 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:28:08.219834 2031 log.go:181] (0xc00076a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 11:28:08.220637 2031 log.go:181] (0xc00003a420) Data frame received for 1\nI1027 11:28:08.220654 2031 log.go:181] (0xc000a6f9a0) (1) Data frame handling\nI1027 11:28:08.220662 2031 log.go:181] (0xc000a6f9a0) (1) Data frame sent\nI1027 11:28:08.220670 2031 log.go:181] (0xc00003a420) (0xc000a6f9a0) Stream removed, broadcasting: 1\nI1027 11:28:08.220679 2031 log.go:181] (0xc00003a420) Go away received\nI1027 11:28:08.220983 2031 log.go:181] (0xc00003a420) (0xc000a6f9a0) Stream removed, broadcasting: 1\nI1027 11:28:08.220996 2031 log.go:181] (0xc00003a420) (0xc000a6fa40) Stream removed, broadcasting: 3\nI1027 11:28:08.221001 2031 log.go:181] (0xc00003a420) (0xc00076a0a0) Stream removed, broadcasting: 5\n" Oct 27 11:28:08.226: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:28:08.226: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:28:08.226: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 27 11:28:38.240: INFO: Deleting all statefulset in ns statefulset-5588 Oct 27 11:28:38.244: INFO: Scaling statefulset ss to 0 Oct 27 11:28:38.252: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:28:38.253: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:28:38.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5588" for this suite. 
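The scaling behaviour verified above relies on the default OrderedReady pod management policy together with a readiness probe that the exec'd `mv` commands deliberately break. A rough sketch of an equivalent manual experiment; the manifest, labels and image are assumptions, not the suite's own:

# Illustrative sketch of ordered StatefulSet scaling halting on an unready pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test                         # headless service, as the spec creates in its namespace
spec:
  clusterIP: None
  selector: {baz: blah, foo: bar}
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: OrderedReady  # default: pods are created one at a time, in ordinal order
  selector:
    matchLabels: {baz: blah, foo: bar}
  template:
    metadata:
      labels: {baz: blah, foo: bar}
    spec:
      containers:
      - name: webserver
        image: httpd:2.4             # assumed; the suite uses its own httpd test image
        readinessProbe:
          httpGet: {path: /index.html, port: 80}
EOF
# Break readiness on ss-0 (the effect of the "mv index.html /tmp/" exec calls), then scale up:
kubectl exec ss-0 -- sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'
kubectl scale statefulset ss --replicas=3
# ss-1 is not created while ss-0 is unready, which is the "doesn't scale past 1" wait in the log.
kubectl exec ss-0 -- sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'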
• [SLOW TEST:102.543 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":188,"skipped":2903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:28:38.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:28:49.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-640" for this suite. • [SLOW TEST:11.216 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":189,"skipped":2946,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:28:49.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-700 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 27 11:28:49.617: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 27 11:28:49.700: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:28:51.704: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:28:53.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:28:55.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:28:57.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:28:59.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:29:01.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:29:03.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:29:05.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:29:07.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:29:09.705: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:29:11.705: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 27 11:29:11.712: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 27 11:29:15.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.139:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-700 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:29:15.787: INFO: >>> kubeConfig: /root/.kube/config I1027 11:29:15.824311 7 log.go:181] (0xc0005ee840) (0xc001db3360) Create stream I1027 11:29:15.824341 7 log.go:181] (0xc0005ee840) (0xc001db3360) Stream added, broadcasting: 1 I1027 11:29:15.826299 7 log.go:181] (0xc0005ee840) Reply frame received for 1 I1027 11:29:15.826342 7 log.go:181] (0xc0005ee840) (0xc00128a8c0) Create stream I1027 11:29:15.826356 7 log.go:181] (0xc0005ee840) (0xc00128a8c0) Stream added, broadcasting: 3 I1027 11:29:15.827344 7 log.go:181] (0xc0005ee840) Reply frame received for 3 I1027 11:29:15.827394 7 log.go:181] (0xc0005ee840) (0xc00128aa00) Create stream I1027 11:29:15.827408 7 log.go:181] (0xc0005ee840) (0xc00128aa00) Stream added, broadcasting: 5 
I1027 11:29:15.828355 7 log.go:181] (0xc0005ee840) Reply frame received for 5 I1027 11:29:15.899457 7 log.go:181] (0xc0005ee840) Data frame received for 3 I1027 11:29:15.899512 7 log.go:181] (0xc00128a8c0) (3) Data frame handling I1027 11:29:15.899537 7 log.go:181] (0xc00128a8c0) (3) Data frame sent I1027 11:29:15.899555 7 log.go:181] (0xc0005ee840) Data frame received for 3 I1027 11:29:15.899570 7 log.go:181] (0xc00128a8c0) (3) Data frame handling I1027 11:29:15.899600 7 log.go:181] (0xc0005ee840) Data frame received for 5 I1027 11:29:15.899618 7 log.go:181] (0xc00128aa00) (5) Data frame handling I1027 11:29:15.901286 7 log.go:181] (0xc0005ee840) Data frame received for 1 I1027 11:29:15.901310 7 log.go:181] (0xc001db3360) (1) Data frame handling I1027 11:29:15.901329 7 log.go:181] (0xc001db3360) (1) Data frame sent I1027 11:29:15.901359 7 log.go:181] (0xc0005ee840) (0xc001db3360) Stream removed, broadcasting: 1 I1027 11:29:15.901389 7 log.go:181] (0xc0005ee840) Go away received I1027 11:29:15.901550 7 log.go:181] (0xc0005ee840) (0xc001db3360) Stream removed, broadcasting: 1 I1027 11:29:15.901598 7 log.go:181] (0xc0005ee840) (0xc00128a8c0) Stream removed, broadcasting: 3 I1027 11:29:15.901625 7 log.go:181] (0xc0005ee840) (0xc00128aa00) Stream removed, broadcasting: 5 Oct 27 11:29:15.901: INFO: Found all expected endpoints: [netserver-0] Oct 27 11:29:15.905: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.98:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-700 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:29:15.905: INFO: >>> kubeConfig: /root/.kube/config I1027 11:29:15.942121 7 log.go:181] (0xc00052def0) (0xc001474fa0) Create stream I1027 11:29:15.942143 7 log.go:181] (0xc00052def0) (0xc001474fa0) Stream added, broadcasting: 1 I1027 11:29:15.946393 7 log.go:181] (0xc00052def0) Reply frame received for 1 I1027 11:29:15.946459 7 log.go:181] (0xc00052def0) (0xc00128ab40) Create stream I1027 11:29:15.946487 7 log.go:181] (0xc00052def0) (0xc00128ab40) Stream added, broadcasting: 3 I1027 11:29:15.948109 7 log.go:181] (0xc00052def0) Reply frame received for 3 I1027 11:29:15.948174 7 log.go:181] (0xc00052def0) (0xc004056460) Create stream I1027 11:29:15.948198 7 log.go:181] (0xc00052def0) (0xc004056460) Stream added, broadcasting: 5 I1027 11:29:15.949540 7 log.go:181] (0xc00052def0) Reply frame received for 5 I1027 11:29:16.016959 7 log.go:181] (0xc00052def0) Data frame received for 3 I1027 11:29:16.016995 7 log.go:181] (0xc00128ab40) (3) Data frame handling I1027 11:29:16.017015 7 log.go:181] (0xc00128ab40) (3) Data frame sent I1027 11:29:16.017039 7 log.go:181] (0xc00052def0) Data frame received for 3 I1027 11:29:16.017069 7 log.go:181] (0xc00128ab40) (3) Data frame handling I1027 11:29:16.017368 7 log.go:181] (0xc00052def0) Data frame received for 5 I1027 11:29:16.017396 7 log.go:181] (0xc004056460) (5) Data frame handling I1027 11:29:16.018803 7 log.go:181] (0xc00052def0) Data frame received for 1 I1027 11:29:16.018834 7 log.go:181] (0xc001474fa0) (1) Data frame handling I1027 11:29:16.018854 7 log.go:181] (0xc001474fa0) (1) Data frame sent I1027 11:29:16.018878 7 log.go:181] (0xc00052def0) (0xc001474fa0) Stream removed, broadcasting: 1 I1027 11:29:16.018905 7 log.go:181] (0xc00052def0) Go away received I1027 11:29:16.019019 7 log.go:181] (0xc00052def0) (0xc001474fa0) Stream removed, broadcasting: 1 I1027 11:29:16.019059 7 log.go:181] 
(0xc00052def0) (0xc00128ab40) Stream removed, broadcasting: 3 I1027 11:29:16.019084 7 log.go:181] (0xc00052def0) (0xc004056460) Stream removed, broadcasting: 5 Oct 27 11:29:16.019: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:29:16.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-700" for this suite. • [SLOW TEST:26.464 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":2949,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:29:16.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1027 11:29:26.172045 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 27 11:30:28.197: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:30:28.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4293" for this suite. 
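The garbage-collector spec above checks ordinary cascading deletion: pods owned by a replication controller disappear once the controller is deleted without orphaning. A hedged sketch of the same behaviour with illustrative names:

# Hypothetical reproduction of non-orphaning (cascading) deletion; names and image are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx                 # assumed image
EOF
# Default deletion propagates to the dependents: the garbage collector removes the pods too.
kubectl delete rc simpletest-rc
kubectl get pods -l app=gc-demo      # should eventually return nothing
# Passing --cascade=false (older kubectl) or --cascade=orphan (newer) would instead leave the pods behind.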
• [SLOW TEST:72.178 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":191,"skipped":2952,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:30:28.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-eaddd4c3-b476-43d0-b24d-23d41541681d in namespace container-probe-9154 Oct 27 11:30:32.367: INFO: Started pod test-webserver-eaddd4c3-b476-43d0-b24d-23d41541681d in namespace container-probe-9154 STEP: checking the pod's current state and verifying that restartCount is present Oct 27 11:30:32.394: INFO: Initial restart count of pod test-webserver-eaddd4c3-b476-43d0-b24d-23d41541681d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:34:33.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9154" for this suite. 
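The probe spec above simply watches a pod whose HTTP liveness probe keeps succeeding and asserts that restartCount never moves off 0 over several minutes. A minimal sketch under assumed names and image:

# Illustrative pod with an always-succeeding HTTP liveness probe; not the suite's test-webserver pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo
spec:
  containers:
  - name: webserver
    image: nginx                     # assumed; any server answering the probed path works
    livenessProbe:
      httpGet:
        path: /                      # a path the server actually serves, so the probe never fails
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
# Check the restart count; it should stay at 0 for as long as the probe keeps passing:
kubectl get pod liveness-ok-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'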
• [SLOW TEST:244.971 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":2980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:34:33.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8510 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8510 I1027 11:34:33.769563 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8510, replica count: 2 I1027 11:34:36.820016 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:34:39.820245 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:34:39.820: INFO: Creating new exec pod Oct 27 11:34:44.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8510 execpodtp9cw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Oct 27 11:34:48.155: INFO: stderr: "I1027 11:34:48.061580 2045 log.go:181] (0xc000818d10) (0xc000b80460) Create stream\nI1027 11:34:48.061632 2045 log.go:181] (0xc000818d10) (0xc000b80460) Stream added, broadcasting: 1\nI1027 11:34:48.063340 2045 log.go:181] (0xc000818d10) Reply frame received for 1\nI1027 11:34:48.063378 2045 log.go:181] (0xc000818d10) (0xc000b80500) Create stream\nI1027 11:34:48.063390 2045 log.go:181] (0xc000818d10) (0xc000b80500) Stream added, broadcasting: 3\nI1027 11:34:48.064280 2045 log.go:181] (0xc000818d10) Reply frame received for 3\nI1027 11:34:48.064300 2045 log.go:181] (0xc000818d10) (0xc0007481e0) Create stream\nI1027 11:34:48.064307 2045 log.go:181] (0xc000818d10) 
(0xc0007481e0) Stream added, broadcasting: 5\nI1027 11:34:48.065246 2045 log.go:181] (0xc000818d10) Reply frame received for 5\nI1027 11:34:48.146194 2045 log.go:181] (0xc000818d10) Data frame received for 5\nI1027 11:34:48.146231 2045 log.go:181] (0xc0007481e0) (5) Data frame handling\nI1027 11:34:48.146258 2045 log.go:181] (0xc0007481e0) (5) Data frame sent\nI1027 11:34:48.146274 2045 log.go:181] (0xc000818d10) Data frame received for 5\nI1027 11:34:48.146283 2045 log.go:181] (0xc0007481e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1027 11:34:48.146306 2045 log.go:181] (0xc0007481e0) (5) Data frame sent\nI1027 11:34:48.146641 2045 log.go:181] (0xc000818d10) Data frame received for 3\nI1027 11:34:48.146673 2045 log.go:181] (0xc000b80500) (3) Data frame handling\nI1027 11:34:48.146711 2045 log.go:181] (0xc000818d10) Data frame received for 5\nI1027 11:34:48.146746 2045 log.go:181] (0xc0007481e0) (5) Data frame handling\nI1027 11:34:48.148443 2045 log.go:181] (0xc000818d10) Data frame received for 1\nI1027 11:34:48.148493 2045 log.go:181] (0xc000b80460) (1) Data frame handling\nI1027 11:34:48.148529 2045 log.go:181] (0xc000b80460) (1) Data frame sent\nI1027 11:34:48.148553 2045 log.go:181] (0xc000818d10) (0xc000b80460) Stream removed, broadcasting: 1\nI1027 11:34:48.148587 2045 log.go:181] (0xc000818d10) Go away received\nI1027 11:34:48.149085 2045 log.go:181] (0xc000818d10) (0xc000b80460) Stream removed, broadcasting: 1\nI1027 11:34:48.149102 2045 log.go:181] (0xc000818d10) (0xc000b80500) Stream removed, broadcasting: 3\nI1027 11:34:48.149109 2045 log.go:181] (0xc000818d10) (0xc0007481e0) Stream removed, broadcasting: 5\n" Oct 27 11:34:48.155: INFO: stdout: "" Oct 27 11:34:48.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8510 execpodtp9cw -- /bin/sh -x -c nc -zv -t -w 2 10.100.216.236 80' Oct 27 11:34:48.368: INFO: stderr: "I1027 11:34:48.278787 2063 log.go:181] (0xc000848e70) (0xc000bb08c0) Create stream\nI1027 11:34:48.278860 2063 log.go:181] (0xc000848e70) (0xc000bb08c0) Stream added, broadcasting: 1\nI1027 11:34:48.280938 2063 log.go:181] (0xc000848e70) Reply frame received for 1\nI1027 11:34:48.280996 2063 log.go:181] (0xc000848e70) (0xc000b30000) Create stream\nI1027 11:34:48.281010 2063 log.go:181] (0xc000848e70) (0xc000b30000) Stream added, broadcasting: 3\nI1027 11:34:48.282099 2063 log.go:181] (0xc000848e70) Reply frame received for 3\nI1027 11:34:48.282138 2063 log.go:181] (0xc000848e70) (0xc000b300a0) Create stream\nI1027 11:34:48.282148 2063 log.go:181] (0xc000848e70) (0xc000b300a0) Stream added, broadcasting: 5\nI1027 11:34:48.283073 2063 log.go:181] (0xc000848e70) Reply frame received for 5\nI1027 11:34:48.361963 2063 log.go:181] (0xc000848e70) Data frame received for 3\nI1027 11:34:48.362017 2063 log.go:181] (0xc000848e70) Data frame received for 5\nI1027 11:34:48.362072 2063 log.go:181] (0xc000b300a0) (5) Data frame handling\nI1027 11:34:48.362090 2063 log.go:181] (0xc000b300a0) (5) Data frame sent\nI1027 11:34:48.362100 2063 log.go:181] (0xc000848e70) Data frame received for 5\nI1027 11:34:48.362108 2063 log.go:181] (0xc000b300a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.216.236 80\nConnection to 10.100.216.236 80 port [tcp/http] succeeded!\nI1027 11:34:48.362145 2063 log.go:181] (0xc000b30000) (3) Data frame handling\nI1027 11:34:48.363158 2063 log.go:181] (0xc000848e70) Data frame 
received for 1\nI1027 11:34:48.363175 2063 log.go:181] (0xc000bb08c0) (1) Data frame handling\nI1027 11:34:48.363185 2063 log.go:181] (0xc000bb08c0) (1) Data frame sent\nI1027 11:34:48.363194 2063 log.go:181] (0xc000848e70) (0xc000bb08c0) Stream removed, broadcasting: 1\nI1027 11:34:48.363250 2063 log.go:181] (0xc000848e70) Go away received\nI1027 11:34:48.363569 2063 log.go:181] (0xc000848e70) (0xc000bb08c0) Stream removed, broadcasting: 1\nI1027 11:34:48.363585 2063 log.go:181] (0xc000848e70) (0xc000b30000) Stream removed, broadcasting: 3\nI1027 11:34:48.363592 2063 log.go:181] (0xc000848e70) (0xc000b300a0) Stream removed, broadcasting: 5\n" Oct 27 11:34:48.368: INFO: stdout: "" Oct 27 11:34:48.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8510 execpodtp9cw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30609' Oct 27 11:34:48.578: INFO: stderr: "I1027 11:34:48.499708 2081 log.go:181] (0xc000e95340) (0xc000e90a00) Create stream\nI1027 11:34:48.499768 2081 log.go:181] (0xc000e95340) (0xc000e90a00) Stream added, broadcasting: 1\nI1027 11:34:48.503902 2081 log.go:181] (0xc000e95340) Reply frame received for 1\nI1027 11:34:48.504139 2081 log.go:181] (0xc000e95340) (0xc000e90aa0) Create stream\nI1027 11:34:48.504176 2081 log.go:181] (0xc000e95340) (0xc000e90aa0) Stream added, broadcasting: 3\nI1027 11:34:48.506623 2081 log.go:181] (0xc000e95340) Reply frame received for 3\nI1027 11:34:48.506671 2081 log.go:181] (0xc000e95340) (0xc000e90000) Create stream\nI1027 11:34:48.506685 2081 log.go:181] (0xc000e95340) (0xc000e90000) Stream added, broadcasting: 5\nI1027 11:34:48.508540 2081 log.go:181] (0xc000e95340) Reply frame received for 5\nI1027 11:34:48.571149 2081 log.go:181] (0xc000e95340) Data frame received for 5\nI1027 11:34:48.571211 2081 log.go:181] (0xc000e90000) (5) Data frame handling\nI1027 11:34:48.571235 2081 log.go:181] (0xc000e90000) (5) Data frame sent\nI1027 11:34:48.571246 2081 log.go:181] (0xc000e95340) Data frame received for 5\nI1027 11:34:48.571253 2081 log.go:181] (0xc000e90000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30609\nConnection to 172.18.0.12 30609 port [tcp/30609] succeeded!\nI1027 11:34:48.571311 2081 log.go:181] (0xc000e95340) Data frame received for 3\nI1027 11:34:48.571351 2081 log.go:181] (0xc000e90aa0) (3) Data frame handling\nI1027 11:34:48.573192 2081 log.go:181] (0xc000e95340) Data frame received for 1\nI1027 11:34:48.573213 2081 log.go:181] (0xc000e90a00) (1) Data frame handling\nI1027 11:34:48.573225 2081 log.go:181] (0xc000e90a00) (1) Data frame sent\nI1027 11:34:48.573242 2081 log.go:181] (0xc000e95340) (0xc000e90a00) Stream removed, broadcasting: 1\nI1027 11:34:48.573260 2081 log.go:181] (0xc000e95340) Go away received\nI1027 11:34:48.573705 2081 log.go:181] (0xc000e95340) (0xc000e90a00) Stream removed, broadcasting: 1\nI1027 11:34:48.573735 2081 log.go:181] (0xc000e95340) (0xc000e90aa0) Stream removed, broadcasting: 3\nI1027 11:34:48.573752 2081 log.go:181] (0xc000e95340) (0xc000e90000) Stream removed, broadcasting: 5\n" Oct 27 11:34:48.578: INFO: stdout: "" Oct 27 11:34:48.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8510 execpodtp9cw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30609' Oct 27 11:34:48.812: INFO: stderr: "I1027 11:34:48.727159 2100 log.go:181] (0xc00073d1e0) (0xc0008310e0) Create stream\nI1027 11:34:48.727237 2100 log.go:181] (0xc00073d1e0) 
(0xc0008310e0) Stream added, broadcasting: 1\nI1027 11:34:48.730741 2100 log.go:181] (0xc00073d1e0) Reply frame received for 1\nI1027 11:34:48.730805 2100 log.go:181] (0xc00073d1e0) (0xc000842fa0) Create stream\nI1027 11:34:48.730828 2100 log.go:181] (0xc00073d1e0) (0xc000842fa0) Stream added, broadcasting: 3\nI1027 11:34:48.732375 2100 log.go:181] (0xc00073d1e0) Reply frame received for 3\nI1027 11:34:48.732447 2100 log.go:181] (0xc00073d1e0) (0xc00085e000) Create stream\nI1027 11:34:48.732466 2100 log.go:181] (0xc00073d1e0) (0xc00085e000) Stream added, broadcasting: 5\nI1027 11:34:48.737351 2100 log.go:181] (0xc00073d1e0) Reply frame received for 5\nI1027 11:34:48.804249 2100 log.go:181] (0xc00073d1e0) Data frame received for 3\nI1027 11:34:48.804286 2100 log.go:181] (0xc000842fa0) (3) Data frame handling\nI1027 11:34:48.804320 2100 log.go:181] (0xc00073d1e0) Data frame received for 5\nI1027 11:34:48.804330 2100 log.go:181] (0xc00085e000) (5) Data frame handling\nI1027 11:34:48.804346 2100 log.go:181] (0xc00085e000) (5) Data frame sent\nI1027 11:34:48.804356 2100 log.go:181] (0xc00073d1e0) Data frame received for 5\nI1027 11:34:48.804369 2100 log.go:181] (0xc00085e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30609\nConnection to 172.18.0.13 30609 port [tcp/30609] succeeded!\nI1027 11:34:48.805670 2100 log.go:181] (0xc00073d1e0) Data frame received for 1\nI1027 11:34:48.805686 2100 log.go:181] (0xc0008310e0) (1) Data frame handling\nI1027 11:34:48.805707 2100 log.go:181] (0xc0008310e0) (1) Data frame sent\nI1027 11:34:48.805722 2100 log.go:181] (0xc00073d1e0) (0xc0008310e0) Stream removed, broadcasting: 1\nI1027 11:34:48.805733 2100 log.go:181] (0xc00073d1e0) Go away received\nI1027 11:34:48.806129 2100 log.go:181] (0xc00073d1e0) (0xc0008310e0) Stream removed, broadcasting: 1\nI1027 11:34:48.806146 2100 log.go:181] (0xc00073d1e0) (0xc000842fa0) Stream removed, broadcasting: 3\nI1027 11:34:48.806154 2100 log.go:181] (0xc00073d1e0) (0xc00085e000) Stream removed, broadcasting: 5\n" Oct 27 11:34:48.812: INFO: stdout: "" Oct 27 11:34:48.812: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:34:48.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8510" for this suite. 
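The spec above drives a Service through the ExternalName to NodePort transition: it creates externalname-service as type=ExternalName, flips it to NodePort, backs it with a two-replica replication controller, and then uses nc from an exec pod to reach the service by DNS name, by ClusterIP, and via the NodePort on each node. A rough manual equivalent with kubectl is sketched below; the namespace, image, and port values are illustrative rather than taken from this run, and the nc flags assume a netcat build that supports them.

  # Create an ExternalName service, then convert it to NodePort (illustrative names/values).
  kubectl create namespace svc-demo
  kubectl -n svc-demo create service externalname externalname-service --external-name example.com

  # Switch the type, drop externalName, and add a selector/port so it can front real pods.
  kubectl -n svc-demo patch service externalname-service --type merge -p '
  spec:
    type: NodePort
    externalName: null
    selector:
      app: externalname-service
    ports:
    - port: 80
      targetPort: 80
  '

  # Back the service with pods and probe it the way the test does.
  kubectl -n svc-demo create deployment externalname-service --image=httpd
  kubectl -n svc-demo scale deployment/externalname-service --replicas=2
  kubectl -n svc-demo rollout status deployment/externalname-service
  kubectl -n svc-demo run execpod --image=busybox --restart=Never -- sleep 3600
  kubectl -n svc-demo wait --for=condition=Ready pod/execpod
  kubectl -n svc-demo exec execpod -- nc -zv -w 2 externalname-service 80
  kubectl -n svc-demo get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}'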
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:15.687 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":193,"skipped":3014,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:34:48.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:34:56.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1267" for this suite. STEP: Destroying namespace "nsdeletetest-223" for this suite. Oct 27 11:34:56.338: INFO: Namespace nsdeletetest-223 was already deleted STEP: Destroying namespace "nsdeletetest-9135" for this suite. 
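The namespace spec above follows its step list literally: create a test namespace, create a Service in it, delete the namespace, wait for the deletion to complete, recreate a namespace, and verify that no Service survived. A minimal reproduction with kubectl, using placeholder names:

  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
  kubectl delete namespace nsdelete-demo      # blocks until the namespace (and its Service) is gone
  kubectl create namespace nsdelete-demo
  kubectl -n nsdelete-demo get services       # expect: No resources found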
• [SLOW TEST:7.478 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":194,"skipped":3036,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:34:56.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 27 11:34:56.402: INFO: Waiting up to 5m0s for pod "pod-fb57fbc6-332f-46a2-9131-41be511bf9ea" in namespace "emptydir-9060" to be "Succeeded or Failed" Oct 27 11:34:56.542: INFO: Pod "pod-fb57fbc6-332f-46a2-9131-41be511bf9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 139.870631ms Oct 27 11:34:58.546: INFO: Pod "pod-fb57fbc6-332f-46a2-9131-41be511bf9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144482457s Oct 27 11:35:00.596: INFO: Pod "pod-fb57fbc6-332f-46a2-9131-41be511bf9ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.193826164s STEP: Saw pod success Oct 27 11:35:00.596: INFO: Pod "pod-fb57fbc6-332f-46a2-9131-41be511bf9ea" satisfied condition "Succeeded or Failed" Oct 27 11:35:00.599: INFO: Trying to get logs from node kali-worker pod pod-fb57fbc6-332f-46a2-9131-41be511bf9ea container test-container: STEP: delete the pod Oct 27 11:35:00.653: INFO: Waiting for pod pod-fb57fbc6-332f-46a2-9131-41be511bf9ea to disappear Oct 27 11:35:00.657: INFO: Pod pod-fb57fbc6-332f-46a2-9131-41be511bf9ea no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:35:00.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9060" for this suite. 
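The emptydir spec above runs a one-shot pod as a non-root user that writes into a memory-backed emptyDir with 0777 permissions and must end up Succeeded. A pod manifest in that spirit is sketched below; the image, user id, and paths are placeholders, not the ones the suite uses.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                # the "non-root" part of (non-root,0777,tmpfs)
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -l /mnt/volume/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory               # the "tmpfs" part: RAM-backed emptyDir
  EOF
  kubectl get pod emptydir-demo      # wait for Phase=Succeeded, as the test does
  kubectl logs emptydir-demo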
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3043,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:35:00.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9813 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-9813 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9813 Oct 27 11:35:00.778: INFO: Found 0 stateful pods, waiting for 1 Oct 27 11:35:10.785: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 27 11:35:10.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9813 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:35:11.119: INFO: stderr: "I1027 11:35:10.922284 2117 log.go:181] (0xc000844160) (0xc000b8c5a0) Create stream\nI1027 11:35:10.922342 2117 log.go:181] (0xc000844160) (0xc000b8c5a0) Stream added, broadcasting: 1\nI1027 11:35:10.928156 2117 log.go:181] (0xc000844160) Reply frame received for 1\nI1027 11:35:10.928211 2117 log.go:181] (0xc000844160) (0xc000b8c000) Create stream\nI1027 11:35:10.928233 2117 log.go:181] (0xc000844160) (0xc000b8c000) Stream added, broadcasting: 3\nI1027 11:35:10.929286 2117 log.go:181] (0xc000844160) Reply frame received for 3\nI1027 11:35:10.929320 2117 log.go:181] (0xc000844160) (0xc0007b6460) Create stream\nI1027 11:35:10.929330 2117 log.go:181] (0xc000844160) (0xc0007b6460) Stream added, broadcasting: 5\nI1027 11:35:10.930277 2117 log.go:181] (0xc000844160) Reply frame received for 5\nI1027 11:35:11.035633 2117 log.go:181] (0xc000844160) Data frame received for 5\nI1027 11:35:11.035669 2117 log.go:181] (0xc0007b6460) (5) Data frame handling\nI1027 11:35:11.035695 2117 log.go:181] (0xc0007b6460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:35:11.105860 2117 log.go:181] (0xc000844160) Data frame received for 
5\nI1027 11:35:11.105911 2117 log.go:181] (0xc0007b6460) (5) Data frame handling\nI1027 11:35:11.105937 2117 log.go:181] (0xc000844160) Data frame received for 3\nI1027 11:35:11.105945 2117 log.go:181] (0xc000b8c000) (3) Data frame handling\nI1027 11:35:11.105961 2117 log.go:181] (0xc000b8c000) (3) Data frame sent\nI1027 11:35:11.106231 2117 log.go:181] (0xc000844160) Data frame received for 3\nI1027 11:35:11.106272 2117 log.go:181] (0xc000b8c000) (3) Data frame handling\nI1027 11:35:11.111072 2117 log.go:181] (0xc000844160) Data frame received for 1\nI1027 11:35:11.111110 2117 log.go:181] (0xc000b8c5a0) (1) Data frame handling\nI1027 11:35:11.111137 2117 log.go:181] (0xc000b8c5a0) (1) Data frame sent\nI1027 11:35:11.111409 2117 log.go:181] (0xc000844160) (0xc000b8c5a0) Stream removed, broadcasting: 1\nI1027 11:35:11.111486 2117 log.go:181] (0xc000844160) Go away received\nI1027 11:35:11.111718 2117 log.go:181] (0xc000844160) (0xc000b8c5a0) Stream removed, broadcasting: 1\nI1027 11:35:11.111735 2117 log.go:181] (0xc000844160) (0xc000b8c000) Stream removed, broadcasting: 3\nI1027 11:35:11.111747 2117 log.go:181] (0xc000844160) (0xc0007b6460) Stream removed, broadcasting: 5\n" Oct 27 11:35:11.119: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:35:11.119: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:35:11.123: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 27 11:35:21.129: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:35:21.129: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:35:21.195: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:21.195: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:21.195: INFO: Oct 27 11:35:21.195: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 27 11:35:22.200: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.944289073s Oct 27 11:35:23.354: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.93922441s Oct 27 11:35:24.360: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.785405566s Oct 27 11:35:25.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.779091251s Oct 27 11:35:26.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.773642165s Oct 27 11:35:27.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.75209559s Oct 27 11:35:28.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.740940566s Oct 27 11:35:29.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.736766675s Oct 27 11:35:30.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 730.672969ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9813 Oct 27 11:35:31.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9813 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:35:31.691: INFO: stderr: "I1027 11:35:31.585085 2136 log.go:181] (0xc00084b6b0) (0xc000628c80) Create stream\nI1027 11:35:31.585141 2136 log.go:181] (0xc00084b6b0) (0xc000628c80) Stream added, broadcasting: 1\nI1027 11:35:31.599662 2136 log.go:181] (0xc00084b6b0) Reply frame received for 1\nI1027 11:35:31.599723 2136 log.go:181] (0xc00084b6b0) (0xc0003cbd60) Create stream\nI1027 11:35:31.599735 2136 log.go:181] (0xc00084b6b0) (0xc0003cbd60) Stream added, broadcasting: 3\nI1027 11:35:31.603195 2136 log.go:181] (0xc00084b6b0) Reply frame received for 3\nI1027 11:35:31.603230 2136 log.go:181] (0xc00084b6b0) (0xc000628000) Create stream\nI1027 11:35:31.603243 2136 log.go:181] (0xc00084b6b0) (0xc000628000) Stream added, broadcasting: 5\nI1027 11:35:31.603958 2136 log.go:181] (0xc00084b6b0) Reply frame received for 5\nI1027 11:35:31.681757 2136 log.go:181] (0xc00084b6b0) Data frame received for 3\nI1027 11:35:31.681795 2136 log.go:181] (0xc0003cbd60) (3) Data frame handling\nI1027 11:35:31.681812 2136 log.go:181] (0xc0003cbd60) (3) Data frame sent\nI1027 11:35:31.681828 2136 log.go:181] (0xc00084b6b0) Data frame received for 3\nI1027 11:35:31.681842 2136 log.go:181] (0xc0003cbd60) (3) Data frame handling\nI1027 11:35:31.681856 2136 log.go:181] (0xc00084b6b0) Data frame received for 5\nI1027 11:35:31.681865 2136 log.go:181] (0xc000628000) (5) Data frame handling\nI1027 11:35:31.681878 2136 log.go:181] (0xc000628000) (5) Data frame sent\nI1027 11:35:31.681891 2136 log.go:181] (0xc00084b6b0) Data frame received for 5\nI1027 11:35:31.681910 2136 log.go:181] (0xc000628000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1027 11:35:31.683221 2136 log.go:181] (0xc00084b6b0) Data frame received for 1\nI1027 11:35:31.683240 2136 log.go:181] (0xc000628c80) (1) Data frame handling\nI1027 11:35:31.683247 2136 log.go:181] (0xc000628c80) (1) Data frame sent\nI1027 11:35:31.683255 2136 log.go:181] (0xc00084b6b0) (0xc000628c80) Stream removed, broadcasting: 1\nI1027 11:35:31.683266 2136 log.go:181] (0xc00084b6b0) Go away received\nI1027 11:35:31.683672 2136 log.go:181] (0xc00084b6b0) (0xc000628c80) Stream removed, broadcasting: 1\nI1027 11:35:31.683700 2136 log.go:181] (0xc00084b6b0) (0xc0003cbd60) Stream removed, broadcasting: 3\nI1027 11:35:31.683727 2136 log.go:181] (0xc00084b6b0) (0xc000628000) Stream removed, broadcasting: 5\n" Oct 27 11:35:31.691: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:35:31.691: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:35:31.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9813 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:35:31.913: INFO: stderr: "I1027 11:35:31.831733 2154 log.go:181] (0xc00027a000) (0xc000468000) Create stream\nI1027 11:35:31.831798 2154 log.go:181] (0xc00027a000) (0xc000468000) Stream added, broadcasting: 1\nI1027 11:35:31.833842 2154 log.go:181] (0xc00027a000) Reply frame received for 1\nI1027 11:35:31.833874 2154 log.go:181] (0xc00027a000) (0xc00063e000) Create stream\nI1027 11:35:31.833882 2154 log.go:181] (0xc00027a000) (0xc00063e000) Stream added, broadcasting: 3\nI1027 11:35:31.834809 2154 log.go:181] (0xc00027a000) Reply 
frame received for 3\nI1027 11:35:31.834839 2154 log.go:181] (0xc00027a000) (0xc00043de00) Create stream\nI1027 11:35:31.834853 2154 log.go:181] (0xc00027a000) (0xc00043de00) Stream added, broadcasting: 5\nI1027 11:35:31.835884 2154 log.go:181] (0xc00027a000) Reply frame received for 5\nI1027 11:35:31.904138 2154 log.go:181] (0xc00027a000) Data frame received for 3\nI1027 11:35:31.904179 2154 log.go:181] (0xc00063e000) (3) Data frame handling\nI1027 11:35:31.904196 2154 log.go:181] (0xc00063e000) (3) Data frame sent\nI1027 11:35:31.904209 2154 log.go:181] (0xc00027a000) Data frame received for 3\nI1027 11:35:31.904215 2154 log.go:181] (0xc00063e000) (3) Data frame handling\nI1027 11:35:31.904244 2154 log.go:181] (0xc00027a000) Data frame received for 5\nI1027 11:35:31.904251 2154 log.go:181] (0xc00043de00) (5) Data frame handling\nI1027 11:35:31.904272 2154 log.go:181] (0xc00043de00) (5) Data frame sent\nI1027 11:35:31.904287 2154 log.go:181] (0xc00027a000) Data frame received for 5\nI1027 11:35:31.904303 2154 log.go:181] (0xc00043de00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1027 11:35:31.906125 2154 log.go:181] (0xc00027a000) Data frame received for 1\nI1027 11:35:31.906161 2154 log.go:181] (0xc000468000) (1) Data frame handling\nI1027 11:35:31.906183 2154 log.go:181] (0xc000468000) (1) Data frame sent\nI1027 11:35:31.906217 2154 log.go:181] (0xc00027a000) (0xc000468000) Stream removed, broadcasting: 1\nI1027 11:35:31.906308 2154 log.go:181] (0xc00027a000) Go away received\nI1027 11:35:31.906758 2154 log.go:181] (0xc00027a000) (0xc000468000) Stream removed, broadcasting: 1\nI1027 11:35:31.906788 2154 log.go:181] (0xc00027a000) (0xc00063e000) Stream removed, broadcasting: 3\nI1027 11:35:31.906802 2154 log.go:181] (0xc00027a000) (0xc00043de00) Stream removed, broadcasting: 5\n" Oct 27 11:35:31.913: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:35:31.913: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:35:31.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9813 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 27 11:35:32.117: INFO: stderr: "I1027 11:35:32.046041 2172 log.go:181] (0xc000142370) (0xc000c8c280) Create stream\nI1027 11:35:32.046089 2172 log.go:181] (0xc000142370) (0xc000c8c280) Stream added, broadcasting: 1\nI1027 11:35:32.048320 2172 log.go:181] (0xc000142370) Reply frame received for 1\nI1027 11:35:32.048356 2172 log.go:181] (0xc000142370) (0xc00090a780) Create stream\nI1027 11:35:32.048371 2172 log.go:181] (0xc000142370) (0xc00090a780) Stream added, broadcasting: 3\nI1027 11:35:32.049539 2172 log.go:181] (0xc000142370) Reply frame received for 3\nI1027 11:35:32.049586 2172 log.go:181] (0xc000142370) (0xc00090b360) Create stream\nI1027 11:35:32.049602 2172 log.go:181] (0xc000142370) (0xc00090b360) Stream added, broadcasting: 5\nI1027 11:35:32.050592 2172 log.go:181] (0xc000142370) Reply frame received for 5\nI1027 11:35:32.110006 2172 log.go:181] (0xc000142370) Data frame received for 3\nI1027 11:35:32.110046 2172 log.go:181] (0xc000142370) Data frame received for 5\nI1027 11:35:32.110074 2172 log.go:181] (0xc00090b360) (5) Data frame handling\nI1027 11:35:32.110090 2172 log.go:181] (0xc00090b360) (5) 
Data frame sent\nI1027 11:35:32.110099 2172 log.go:181] (0xc000142370) Data frame received for 5\nI1027 11:35:32.110104 2172 log.go:181] (0xc00090b360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1027 11:35:32.110123 2172 log.go:181] (0xc00090a780) (3) Data frame handling\nI1027 11:35:32.110137 2172 log.go:181] (0xc00090a780) (3) Data frame sent\nI1027 11:35:32.110149 2172 log.go:181] (0xc000142370) Data frame received for 3\nI1027 11:35:32.110156 2172 log.go:181] (0xc00090a780) (3) Data frame handling\nI1027 11:35:32.111539 2172 log.go:181] (0xc000142370) Data frame received for 1\nI1027 11:35:32.111682 2172 log.go:181] (0xc000c8c280) (1) Data frame handling\nI1027 11:35:32.111728 2172 log.go:181] (0xc000c8c280) (1) Data frame sent\nI1027 11:35:32.111747 2172 log.go:181] (0xc000142370) (0xc000c8c280) Stream removed, broadcasting: 1\nI1027 11:35:32.111774 2172 log.go:181] (0xc000142370) Go away received\nI1027 11:35:32.112110 2172 log.go:181] (0xc000142370) (0xc000c8c280) Stream removed, broadcasting: 1\nI1027 11:35:32.112123 2172 log.go:181] (0xc000142370) (0xc00090a780) Stream removed, broadcasting: 3\nI1027 11:35:32.112129 2172 log.go:181] (0xc000142370) (0xc00090b360) Stream removed, broadcasting: 5\n" Oct 27 11:35:32.117: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 27 11:35:32.117: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 27 11:35:32.122: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Oct 27 11:35:42.128: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 27 11:35:42.128: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 27 11:35:42.128: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 27 11:35:42.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9813 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:35:42.404: INFO: stderr: "I1027 11:35:42.281213 2190 log.go:181] (0xc000ec3080) (0xc000e88c80) Create stream\nI1027 11:35:42.281283 2190 log.go:181] (0xc000ec3080) (0xc000e88c80) Stream added, broadcasting: 1\nI1027 11:35:42.287533 2190 log.go:181] (0xc000ec3080) Reply frame received for 1\nI1027 11:35:42.287588 2190 log.go:181] (0xc000ec3080) (0xc00073a1e0) Create stream\nI1027 11:35:42.287627 2190 log.go:181] (0xc000ec3080) (0xc00073a1e0) Stream added, broadcasting: 3\nI1027 11:35:42.288774 2190 log.go:181] (0xc000ec3080) Reply frame received for 3\nI1027 11:35:42.288810 2190 log.go:181] (0xc000ec3080) (0xc000e88d20) Create stream\nI1027 11:35:42.288820 2190 log.go:181] (0xc000ec3080) (0xc000e88d20) Stream added, broadcasting: 5\nI1027 11:35:42.290069 2190 log.go:181] (0xc000ec3080) Reply frame received for 5\nI1027 11:35:42.395283 2190 log.go:181] (0xc000ec3080) Data frame received for 3\nI1027 11:35:42.395339 2190 log.go:181] (0xc00073a1e0) (3) Data frame handling\nI1027 11:35:42.395356 2190 log.go:181] (0xc00073a1e0) (3) Data frame sent\nI1027 11:35:42.395368 2190 log.go:181] (0xc000ec3080) Data frame received for 3\nI1027 11:35:42.395384 2190 log.go:181] (0xc00073a1e0) (3) Data frame 
handling\nI1027 11:35:42.395424 2190 log.go:181] (0xc000ec3080) Data frame received for 5\nI1027 11:35:42.395471 2190 log.go:181] (0xc000e88d20) (5) Data frame handling\nI1027 11:35:42.395498 2190 log.go:181] (0xc000e88d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:35:42.395520 2190 log.go:181] (0xc000ec3080) Data frame received for 5\nI1027 11:35:42.395542 2190 log.go:181] (0xc000e88d20) (5) Data frame handling\nI1027 11:35:42.397139 2190 log.go:181] (0xc000ec3080) Data frame received for 1\nI1027 11:35:42.397184 2190 log.go:181] (0xc000e88c80) (1) Data frame handling\nI1027 11:35:42.397213 2190 log.go:181] (0xc000e88c80) (1) Data frame sent\nI1027 11:35:42.397378 2190 log.go:181] (0xc000ec3080) (0xc000e88c80) Stream removed, broadcasting: 1\nI1027 11:35:42.397404 2190 log.go:181] (0xc000ec3080) Go away received\nI1027 11:35:42.397896 2190 log.go:181] (0xc000ec3080) (0xc000e88c80) Stream removed, broadcasting: 1\nI1027 11:35:42.397920 2190 log.go:181] (0xc000ec3080) (0xc00073a1e0) Stream removed, broadcasting: 3\nI1027 11:35:42.397932 2190 log.go:181] (0xc000ec3080) (0xc000e88d20) Stream removed, broadcasting: 5\n" Oct 27 11:35:42.404: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:35:42.404: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:35:42.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9813 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:35:42.655: INFO: stderr: "I1027 11:35:42.543415 2208 log.go:181] (0xc000b413f0) (0xc000922640) Create stream\nI1027 11:35:42.543471 2208 log.go:181] (0xc000b413f0) (0xc000922640) Stream added, broadcasting: 1\nI1027 11:35:42.549367 2208 log.go:181] (0xc000b413f0) Reply frame received for 1\nI1027 11:35:42.549413 2208 log.go:181] (0xc000b413f0) (0xc000cb00a0) Create stream\nI1027 11:35:42.549427 2208 log.go:181] (0xc000b413f0) (0xc000cb00a0) Stream added, broadcasting: 3\nI1027 11:35:42.550448 2208 log.go:181] (0xc000b413f0) Reply frame received for 3\nI1027 11:35:42.550484 2208 log.go:181] (0xc000b413f0) (0xc00072a000) Create stream\nI1027 11:35:42.550494 2208 log.go:181] (0xc000b413f0) (0xc00072a000) Stream added, broadcasting: 5\nI1027 11:35:42.551318 2208 log.go:181] (0xc000b413f0) Reply frame received for 5\nI1027 11:35:42.616952 2208 log.go:181] (0xc000b413f0) Data frame received for 5\nI1027 11:35:42.616987 2208 log.go:181] (0xc00072a000) (5) Data frame handling\nI1027 11:35:42.617014 2208 log.go:181] (0xc00072a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:35:42.646039 2208 log.go:181] (0xc000b413f0) Data frame received for 3\nI1027 11:35:42.646083 2208 log.go:181] (0xc000cb00a0) (3) Data frame handling\nI1027 11:35:42.646104 2208 log.go:181] (0xc000cb00a0) (3) Data frame sent\nI1027 11:35:42.646150 2208 log.go:181] (0xc000b413f0) Data frame received for 5\nI1027 11:35:42.646186 2208 log.go:181] (0xc00072a000) (5) Data frame handling\nI1027 11:35:42.646210 2208 log.go:181] (0xc000b413f0) Data frame received for 3\nI1027 11:35:42.646221 2208 log.go:181] (0xc000cb00a0) (3) Data frame handling\nI1027 11:35:42.647774 2208 log.go:181] (0xc000b413f0) Data frame received for 1\nI1027 11:35:42.647790 2208 log.go:181] (0xc000922640) (1) Data frame handling\nI1027 11:35:42.647799 2208 log.go:181] 
(0xc000922640) (1) Data frame sent\nI1027 11:35:42.647807 2208 log.go:181] (0xc000b413f0) (0xc000922640) Stream removed, broadcasting: 1\nI1027 11:35:42.647819 2208 log.go:181] (0xc000b413f0) Go away received\nI1027 11:35:42.648243 2208 log.go:181] (0xc000b413f0) (0xc000922640) Stream removed, broadcasting: 1\nI1027 11:35:42.648265 2208 log.go:181] (0xc000b413f0) (0xc000cb00a0) Stream removed, broadcasting: 3\nI1027 11:35:42.648275 2208 log.go:181] (0xc000b413f0) (0xc00072a000) Stream removed, broadcasting: 5\n" Oct 27 11:35:42.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:35:42.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:35:42.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9813 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 27 11:35:42.930: INFO: stderr: "I1027 11:35:42.795173 2226 log.go:181] (0xc00003a0b0) (0xc000312dc0) Create stream\nI1027 11:35:42.795261 2226 log.go:181] (0xc00003a0b0) (0xc000312dc0) Stream added, broadcasting: 1\nI1027 11:35:42.797507 2226 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1027 11:35:42.797563 2226 log.go:181] (0xc00003a0b0) (0xc000890140) Create stream\nI1027 11:35:42.797578 2226 log.go:181] (0xc00003a0b0) (0xc000890140) Stream added, broadcasting: 3\nI1027 11:35:42.799386 2226 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1027 11:35:42.799432 2226 log.go:181] (0xc00003a0b0) (0xc000540780) Create stream\nI1027 11:35:42.799457 2226 log.go:181] (0xc00003a0b0) (0xc000540780) Stream added, broadcasting: 5\nI1027 11:35:42.800481 2226 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1027 11:35:42.860965 2226 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 11:35:42.860986 2226 log.go:181] (0xc000540780) (5) Data frame handling\nI1027 11:35:42.860996 2226 log.go:181] (0xc000540780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1027 11:35:42.920642 2226 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1027 11:35:42.920687 2226 log.go:181] (0xc000890140) (3) Data frame handling\nI1027 11:35:42.920711 2226 log.go:181] (0xc000890140) (3) Data frame sent\nI1027 11:35:42.920725 2226 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1027 11:35:42.920735 2226 log.go:181] (0xc000890140) (3) Data frame handling\nI1027 11:35:42.920769 2226 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1027 11:35:42.920792 2226 log.go:181] (0xc000540780) (5) Data frame handling\nI1027 11:35:42.922531 2226 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1027 11:35:42.922562 2226 log.go:181] (0xc000312dc0) (1) Data frame handling\nI1027 11:35:42.922594 2226 log.go:181] (0xc000312dc0) (1) Data frame sent\nI1027 11:35:42.922617 2226 log.go:181] (0xc00003a0b0) (0xc000312dc0) Stream removed, broadcasting: 1\nI1027 11:35:42.922815 2226 log.go:181] (0xc00003a0b0) Go away received\nI1027 11:35:42.923130 2226 log.go:181] (0xc00003a0b0) (0xc000312dc0) Stream removed, broadcasting: 1\nI1027 11:35:42.923161 2226 log.go:181] (0xc00003a0b0) (0xc000890140) Stream removed, broadcasting: 3\nI1027 11:35:42.923176 2226 log.go:181] (0xc00003a0b0) (0xc000540780) Stream removed, broadcasting: 5\n" Oct 27 11:35:42.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 27 11:35:42.930: INFO: stdout of mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 27 11:35:42.930: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:35:42.934: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 27 11:35:52.944: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:35:52.944: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:35:52.944: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 27 11:35:52.969: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:52.969: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:52.969: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:52.969: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:52.969: INFO: Oct 27 11:35:52.969: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 27 11:35:53.975: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:53.975: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:53.975: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:53.975: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 
11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:53.975: INFO: Oct 27 11:35:53.975: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 27 11:35:55.010: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:55.010: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:55.010: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:55.010: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:55.010: INFO: Oct 27 11:35:55.010: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 27 11:35:56.015: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:56.015: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:56.015: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:56.015: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:56.015: INFO: Oct 27 11:35:56.015: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 27 11:35:57.020: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:57.020: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:57.020: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:57.021: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:57.021: INFO: Oct 27 11:35:57.021: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 27 11:35:58.027: INFO: POD NODE PHASE GRACE CONDITIONS Oct 27 11:35:58.027: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:00 +0000 UTC }] Oct 27 11:35:58.027: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:58.027: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-27 11:35:21 +0000 UTC }] Oct 27 11:35:58.027: INFO: Oct 27 11:35:58.027: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 27 11:35:59.031: 
INFO: Verifying statefulset ss doesn't scale past 0 for another 3.923721488s Oct 27 11:36:00.035: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.919919733s Oct 27 11:36:01.040: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.915818828s Oct 27 11:36:02.044: INFO: Verifying statefulset ss doesn't scale past 0 for another 910.833462ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9813 Oct 27 11:36:03.049: INFO: Scaling statefulset ss to 0 Oct 27 11:36:03.061: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 27 11:36:03.063: INFO: Deleting all statefulset in ns statefulset-9813 Oct 27 11:36:03.065: INFO: Scaling statefulset ss to 0 Oct 27 11:36:03.074: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 11:36:03.076: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:36:03.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9813" for this suite. • [SLOW TEST:62.429 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":196,"skipped":3090,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:36:03.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 27 11:36:03.203: INFO: Waiting up to 5m0s for pod "downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be" in namespace "downward-api-3366" to be "Succeeded or Failed" Oct 27 11:36:03.259: 
INFO: Pod "downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be": Phase="Pending", Reason="", readiness=false. Elapsed: 55.575746ms Oct 27 11:36:05.263: INFO: Pod "downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059837237s Oct 27 11:36:07.268: INFO: Pod "downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064546559s STEP: Saw pod success Oct 27 11:36:07.268: INFO: Pod "downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be" satisfied condition "Succeeded or Failed" Oct 27 11:36:07.271: INFO: Trying to get logs from node kali-worker pod downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be container dapi-container: STEP: delete the pod Oct 27 11:36:07.309: INFO: Waiting for pod downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be to disappear Oct 27 11:36:07.344: INFO: Pod downward-api-ef1df6a0-93e9-45f6-9376-cf058b1874be no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:36:07.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3366" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3091,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:36:07.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-787d9e0f-d98c-44d6-b7b5-59232198e589 STEP: Creating secret with name s-test-opt-upd-2f177c7b-5f1e-4052-8acf-d7f2416eb7e2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-787d9e0f-d98c-44d6-b7b5-59232198e589 STEP: Updating secret s-test-opt-upd-2f177c7b-5f1e-4052-8acf-d7f2416eb7e2 STEP: Creating secret with name s-test-opt-create-bac078ac-ff28-4bc5-b469-5b985124eb0c STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:36:17.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7784" for this suite. 
• [SLOW TEST:10.263 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":198,"skipped":3129,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:36:17.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:36:17.669: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a" in namespace "projected-5740" to be "Succeeded or Failed" Oct 27 11:36:17.680: INFO: Pod "downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.897642ms Oct 27 11:36:19.684: INFO: Pod "downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014241672s Oct 27 11:36:21.689: INFO: Pod "downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019307176s STEP: Saw pod success Oct 27 11:36:21.689: INFO: Pod "downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a" satisfied condition "Succeeded or Failed" Oct 27 11:36:21.691: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a container client-container: STEP: delete the pod Oct 27 11:36:21.742: INFO: Waiting for pod downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a to disappear Oct 27 11:36:21.744: INFO: Pod downwardapi-volume-e7dabb54-a72f-4ebe-91c9-d14a073d7e1a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:36:21.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5740" for this suite. 
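The projected downwardAPI spec above is only about file modes: it projects pod metadata into a volume and asserts that the resulting files carry the volume's DefaultMode. A sketch of such a volume follows; the mode value, image, and names are examples, not values read from the suite.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "stat -Lc '%a %n' /etc/podinfo/podname && cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400            # octal; stat above should report 400 for the projected file
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
  kubectl logs downwardapi-mode-demo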
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:36:21.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 27 11:36:21.890: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1507 /api/v1/namespaces/watch-1507/configmaps/e2e-watch-test-resource-version 0547d758-5db2-47ac-a3bb-7a472d7407a7 8979239 0 2020-10-27 11:36:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-27 11:36:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 11:36:21.891: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1507 /api/v1/namespaces/watch-1507/configmaps/e2e-watch-test-resource-version 0547d758-5db2-47ac-a3bb-7a472d7407a7 8979240 0 2020-10-27 11:36:21 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-27 11:36:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:36:21.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1507" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":200,"skipped":3154,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:36:21.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:36:22.034: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:36:24.037: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:36:26.039: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:36:28.039: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:30.038: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:32.039: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:34.039: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:36.039: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:38.038: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:40.040: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = false) Oct 27 11:36:42.040: INFO: The status of Pod test-webserver-cfe55afd-f99f-4465-8b90-a1dcc0ed868b is Running (Ready = true) Oct 27 11:36:42.043: INFO: Container started at 2020-10-27 11:36:25 +0000 UTC, pod became ready at 2020-10-27 11:36:40 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:36:42.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1495" for this suite. 
• [SLOW TEST:20.140 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":201,"skipped":3157,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:36:42.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:36:42.170: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 27 11:36:47.173: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 27 11:36:47.173: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 27 11:36:49.177: INFO: Creating deployment "test-rollover-deployment" Oct 27 11:36:49.184: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 27 11:36:51.190: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 27 11:36:51.198: INFO: Ensure that both replica sets have 1 created replica Oct 27 11:36:51.203: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 27 11:36:51.213: INFO: Updating deployment test-rollover-deployment Oct 27 11:36:51.213: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 27 11:36:53.265: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 27 11:36:53.271: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 27 11:36:53.276: INFO: all replica sets need to contain the pod-template-hash label Oct 27 11:36:53.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395411, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:36:55.286: INFO: all replica sets need to contain the pod-template-hash label Oct 27 11:36:55.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395414, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:36:57.286: INFO: all replica sets need to contain the pod-template-hash label Oct 27 11:36:57.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395414, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:36:59.284: INFO: all replica sets need to contain the pod-template-hash label Oct 27 11:36:59.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395414, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Oct 27 11:37:01.284: INFO: all replica sets need to contain the pod-template-hash label Oct 27 11:37:01.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395414, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:37:03.289: INFO: all replica sets need to contain the pod-template-hash label Oct 27 11:37:03.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395414, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739395409, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:37:05.392: INFO: Oct 27 11:37:05.392: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 27 11:37:05.431: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5264 /apis/apps/v1/namespaces/deployment-5264/deployments/test-rollover-deployment 1d2e171f-d1e0-49f5-8056-af1c84d28709 8979497 2 2020-10-27 11:36:49 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-27 11:36:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager 
Update apps/v1 2020-10-27 11:37:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0032b58a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-27 11:36:49 +0000 UTC,LastTransitionTime:2020-10-27 11:36:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-10-27 11:37:04 +0000 UTC,LastTransitionTime:2020-10-27 11:36:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 27 11:37:05.434: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-5264 /apis/apps/v1/namespaces/deployment-5264/replicasets/test-rollover-deployment-5797c7764 e4c1126f-6ded-425b-b0c3-87fb3ce4a490 8979485 2 2020-10-27 11:36:51 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1d2e171f-d1e0-49f5-8056-af1c84d28709 0xc007bef4d0 0xc007bef4d1}] [] [{kube-controller-manager Update apps/v1 2020-10-27 11:37:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d2e171f-d1e0-49f5-8056-af1c84d28709\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007bef548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:37:05.434: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 27 11:37:05.434: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5264 /apis/apps/v1/namespaces/deployment-5264/replicasets/test-rollover-controller 8597ba68-a035-4c6e-b8f2-8abafa2c9616 8979495 2 2020-10-27 11:36:42 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1d2e171f-d1e0-49f5-8056-af1c84d28709 0xc007bef3bf 0xc007bef3d0}] [] [{e2e.test Update apps/v1 2020-10-27 11:36:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-27 11:37:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d2e171f-d1e0-49f5-8056-af1c84d28709\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc007bef468 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:37:05.435: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-5264 /apis/apps/v1/namespaces/deployment-5264/replicasets/test-rollover-deployment-78bc8b888c 56a9a8e9-95dd-43b3-a30d-d6014ee2b3dc 8979436 2 2020-10-27 11:36:49 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1d2e171f-d1e0-49f5-8056-af1c84d28709 0xc007bef5b7 0xc007bef5b8}] [] [{kube-controller-manager Update apps/v1 2020-10-27 11:36:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1d2e171f-d1e0-49f5-8056-af1c84d28709\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc007bef648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 27 11:37:05.437: INFO: Pod "test-rollover-deployment-5797c7764-56hns" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-56hns test-rollover-deployment-5797c7764- deployment-5264 /api/v1/namespaces/deployment-5264/pods/test-rollover-deployment-5797c7764-56hns 6bddbea1-0428-4aa3-b15c-ee32379fc78f 8979451 0 2020-10-27 11:36:51 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 e4c1126f-6ded-425b-b0c3-87fb3ce4a490 0xc0032b5e40 0xc0032b5e41}] [] [{kube-controller-manager Update v1 2020-10-27 11:36:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e4c1126f-6ded-425b-b0c3-87fb3ce4a490\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 11:36:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8fzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8fzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8fzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:36:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:36:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 11:36:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.107,StartTime:2020-10-27 11:36:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 11:36:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://d7393966ee1d59c50264a2d8382e2f7b7dc624610b78bfbe5f8d575549db22b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.107,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:37:05.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5264" for this suite. • [SLOW TEST:23.390 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":202,"skipped":3179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:37:05.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:37:05.566: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d" in namespace "projected-9638" to be "Succeeded or Failed" Oct 27 11:37:05.581: INFO: Pod "downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.1825ms Oct 27 11:37:07.586: INFO: Pod "downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019926013s Oct 27 11:37:09.591: INFO: Pod "downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024591916s STEP: Saw pod success Oct 27 11:37:09.591: INFO: Pod "downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d" satisfied condition "Succeeded or Failed" Oct 27 11:37:09.595: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d container client-container: STEP: delete the pod Oct 27 11:37:09.627: INFO: Waiting for pod downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d to disappear Oct 27 11:37:09.638: INFO: Pod downwardapi-volume-05b9a87c-d96f-4f78-b5a3-064f343d651d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:37:09.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9638" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":203,"skipped":3208,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:37:09.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-8zjd STEP: Creating a pod to test atomic-volume-subpath Oct 27 11:37:09.754: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8zjd" in namespace "subpath-1551" to be "Succeeded or Failed" Oct 27 11:37:09.758: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.754777ms Oct 27 11:37:11.837: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082772029s Oct 27 11:37:13.840: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086025216s Oct 27 11:37:15.844: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.090304851s Oct 27 11:37:17.849: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 8.094954897s Oct 27 11:37:19.852: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 10.098316295s Oct 27 11:37:21.890: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 12.135809494s Oct 27 11:37:23.911: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 14.156678362s Oct 27 11:37:25.915: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 16.160876589s Oct 27 11:37:27.921: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 18.167521953s Oct 27 11:37:29.925: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 20.171036612s Oct 27 11:37:31.929: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Running", Reason="", readiness=true. Elapsed: 22.175194997s Oct 27 11:37:33.934: INFO: Pod "pod-subpath-test-downwardapi-8zjd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.179807829s STEP: Saw pod success Oct 27 11:37:33.934: INFO: Pod "pod-subpath-test-downwardapi-8zjd" satisfied condition "Succeeded or Failed" Oct 27 11:37:33.937: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-8zjd container test-container-subpath-downwardapi-8zjd: STEP: delete the pod Oct 27 11:37:34.000: INFO: Waiting for pod pod-subpath-test-downwardapi-8zjd to disappear Oct 27 11:37:34.007: INFO: Pod pod-subpath-test-downwardapi-8zjd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-8zjd Oct 27 11:37:34.007: INFO: Deleting pod "pod-subpath-test-downwardapi-8zjd" in namespace "subpath-1551" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:37:34.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1551" for this suite. 
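The subpath test above mounts only a subdirectory of a downward API volume into the container via subPath, which is what makes the atomic-writer update behaviour observable. A minimal sketch with hypothetical names; the looping reader container the real test uses is omitted here.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "downward/podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-subpath",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/test-volume",
					SubPath:   "downward", // mount only the atomically-updated subdirectory
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("subPath: %s\n", pod.Spec.Containers[0].VolumeMounts[0].SubPath)
}
```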
• [SLOW TEST:24.543 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":204,"skipped":3212,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:37:34.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 27 11:37:34.452: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 27 11:37:45.316: INFO: >>> kubeConfig: /root/.kube/config Oct 27 11:37:48.299: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:38:01.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2798" for this suite. 
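A sketch of a CustomResourceDefinition with two served versions in the same group, the shape the OpenAPI-publishing test above verifies: both served versions show up in the OpenAPI document, and exactly one is the storage version. Group, kind, and version names here are hypothetical, not the randomly generated ones from the test.

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Minimal structural schema shared by both versions.
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "multiversiontests.crd-publish-openapi-test.example.com"}, // hypothetical
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com", // hypothetical group
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "multiversiontests",
				Singular: "multiversiontest",
				Kind:     "MultiVersionTest",
				ListKind: "MultiVersionTestList",
			},
			// Two versions of the same group; exactly one marked as storage.
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v2", Served: true, Storage: true, Schema: schema},
				{Name: "v3", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	fmt.Printf("CRD %s with %d versions\n", crd.Name, len(crd.Spec.Versions))
}
```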
• [SLOW TEST:26.957 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":205,"skipped":3228,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:38:01.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 27 11:38:01.217: INFO: Waiting up to 5m0s for pod "pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981" in namespace "emptydir-2478" to be "Succeeded or Failed" Oct 27 11:38:01.230: INFO: Pod "pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981": Phase="Pending", Reason="", readiness=false. Elapsed: 12.791691ms Oct 27 11:38:03.294: INFO: Pod "pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076216839s Oct 27 11:38:05.299: INFO: Pod "pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081032584s STEP: Saw pod success Oct 27 11:38:05.299: INFO: Pod "pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981" satisfied condition "Succeeded or Failed" Oct 27 11:38:05.301: INFO: Trying to get logs from node kali-worker pod pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981 container test-container: STEP: delete the pod Oct 27 11:38:05.336: INFO: Waiting for pod pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981 to disappear Oct 27 11:38:05.346: INFO: Pod pod-53dbc815-ef57-41f6-9bfb-864f8b4ba981 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:38:05.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2478" for this suite. 
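A sketch of the pod shape behind the emptyDir "(non-root,0777,default)" case above: an emptyDir volume on the default (disk-backed) medium mounted into a container that runs as a non-root user; the test's container then creates a 0777 file in the mount and verifies its permissions. The UID, names, and image are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // hypothetical non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // assumed image
				// Run as a non-root user; the test container writes a 0777 file
				// into the emptyDir mount and checks the resulting mode.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium: backed by node disk rather than tmpfs (Memory).
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("runAsUser: %d, medium: %q\n", *pod.Spec.Containers[0].SecurityContext.RunAsUser, pod.Spec.Volumes[0].EmptyDir.Medium)
}
```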
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:38:05.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Oct 27 11:38:05.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config cluster-info' Oct 27 11:38:05.500: INFO: stderr: "" Oct 27 11:38:05.500: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34561\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34561/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:38:05.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1577" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":207,"skipped":3261,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:38:05.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5088 STEP: creating service affinity-nodeport in namespace services-5088 STEP: creating replication controller affinity-nodeport in namespace services-5088 I1027 11:38:05.701084 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5088, replica count: 3 I1027 11:38:08.751476 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:38:11.751730 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:38:11.762: INFO: Creating new exec pod Oct 27 11:38:17.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5088 execpod-affinity529ls -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Oct 27 11:38:17.245: INFO: stderr: "I1027 11:38:17.140721 2262 log.go:181] (0xc00047c000) (0xc0003292c0) Create stream\nI1027 11:38:17.140810 2262 log.go:181] (0xc00047c000) (0xc0003292c0) Stream added, broadcasting: 1\nI1027 11:38:17.143092 2262 log.go:181] (0xc00047c000) Reply frame received for 1\nI1027 11:38:17.143121 2262 log.go:181] (0xc00047c000) (0xc000d76000) Create stream\nI1027 11:38:17.143130 2262 log.go:181] (0xc00047c000) (0xc000d76000) Stream added, broadcasting: 3\nI1027 11:38:17.144054 2262 log.go:181] (0xc00047c000) Reply frame received for 3\nI1027 11:38:17.144088 2262 log.go:181] (0xc00047c000) (0xc0009b8000) Create stream\nI1027 11:38:17.144103 2262 log.go:181] (0xc00047c000) (0xc0009b8000) Stream added, broadcasting: 5\nI1027 11:38:17.145230 2262 log.go:181] (0xc00047c000) Reply frame received for 5\nI1027 11:38:17.238454 2262 log.go:181] (0xc00047c000) Data frame received for 5\nI1027 11:38:17.238482 2262 log.go:181] (0xc0009b8000) (5) Data frame handling\nI1027 11:38:17.238498 2262 log.go:181] (0xc0009b8000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI1027 11:38:17.238794 2262 log.go:181] (0xc00047c000) Data frame received for 5\nI1027 11:38:17.238815 2262 log.go:181] 
(0xc0009b8000) (5) Data frame handling\nI1027 11:38:17.238836 2262 log.go:181] (0xc0009b8000) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1027 11:38:17.239111 2262 log.go:181] (0xc00047c000) Data frame received for 5\nI1027 11:38:17.239134 2262 log.go:181] (0xc0009b8000) (5) Data frame handling\nI1027 11:38:17.239153 2262 log.go:181] (0xc00047c000) Data frame received for 3\nI1027 11:38:17.239161 2262 log.go:181] (0xc000d76000) (3) Data frame handling\nI1027 11:38:17.240491 2262 log.go:181] (0xc00047c000) Data frame received for 1\nI1027 11:38:17.240520 2262 log.go:181] (0xc0003292c0) (1) Data frame handling\nI1027 11:38:17.240535 2262 log.go:181] (0xc0003292c0) (1) Data frame sent\nI1027 11:38:17.240550 2262 log.go:181] (0xc00047c000) (0xc0003292c0) Stream removed, broadcasting: 1\nI1027 11:38:17.240580 2262 log.go:181] (0xc00047c000) Go away received\nI1027 11:38:17.240898 2262 log.go:181] (0xc00047c000) (0xc0003292c0) Stream removed, broadcasting: 1\nI1027 11:38:17.240916 2262 log.go:181] (0xc00047c000) (0xc000d76000) Stream removed, broadcasting: 3\nI1027 11:38:17.240921 2262 log.go:181] (0xc00047c000) (0xc0009b8000) Stream removed, broadcasting: 5\n" Oct 27 11:38:17.245: INFO: stdout: "" Oct 27 11:38:17.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5088 execpod-affinity529ls -- /bin/sh -x -c nc -zv -t -w 2 10.107.2.161 80' Oct 27 11:38:17.464: INFO: stderr: "I1027 11:38:17.381355 2280 log.go:181] (0xc000d12f20) (0xc00068ebe0) Create stream\nI1027 11:38:17.381422 2280 log.go:181] (0xc000d12f20) (0xc00068ebe0) Stream added, broadcasting: 1\nI1027 11:38:17.386499 2280 log.go:181] (0xc000d12f20) Reply frame received for 1\nI1027 11:38:17.386603 2280 log.go:181] (0xc000d12f20) (0xc000bb4000) Create stream\nI1027 11:38:17.386652 2280 log.go:181] (0xc000d12f20) (0xc000bb4000) Stream added, broadcasting: 3\nI1027 11:38:17.387543 2280 log.go:181] (0xc000d12f20) Reply frame received for 3\nI1027 11:38:17.387579 2280 log.go:181] (0xc000d12f20) (0xc000132460) Create stream\nI1027 11:38:17.387590 2280 log.go:181] (0xc000d12f20) (0xc000132460) Stream added, broadcasting: 5\nI1027 11:38:17.388280 2280 log.go:181] (0xc000d12f20) Reply frame received for 5\nI1027 11:38:17.455766 2280 log.go:181] (0xc000d12f20) Data frame received for 3\nI1027 11:38:17.455825 2280 log.go:181] (0xc000bb4000) (3) Data frame handling\nI1027 11:38:17.455870 2280 log.go:181] (0xc000d12f20) Data frame received for 5\nI1027 11:38:17.455885 2280 log.go:181] (0xc000132460) (5) Data frame handling\nI1027 11:38:17.455900 2280 log.go:181] (0xc000132460) (5) Data frame sent\nI1027 11:38:17.455912 2280 log.go:181] (0xc000d12f20) Data frame received for 5\nI1027 11:38:17.455923 2280 log.go:181] (0xc000132460) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.2.161 80\nConnection to 10.107.2.161 80 port [tcp/http] succeeded!\nI1027 11:38:17.457344 2280 log.go:181] (0xc000d12f20) Data frame received for 1\nI1027 11:38:17.457366 2280 log.go:181] (0xc00068ebe0) (1) Data frame handling\nI1027 11:38:17.457382 2280 log.go:181] (0xc00068ebe0) (1) Data frame sent\nI1027 11:38:17.457591 2280 log.go:181] (0xc000d12f20) (0xc00068ebe0) Stream removed, broadcasting: 1\nI1027 11:38:17.458011 2280 log.go:181] (0xc000d12f20) (0xc00068ebe0) Stream removed, broadcasting: 1\nI1027 11:38:17.458036 2280 log.go:181] (0xc000d12f20) (0xc000bb4000) Stream removed, broadcasting: 3\nI1027 11:38:17.458180 2280 log.go:181] 
(0xc000d12f20) (0xc000132460) Stream removed, broadcasting: 5\n" Oct 27 11:38:17.465: INFO: stdout: "" Oct 27 11:38:17.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5088 execpod-affinity529ls -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32742' Oct 27 11:38:17.672: INFO: stderr: "I1027 11:38:17.586260 2298 log.go:181] (0xc000750fd0) (0xc000b2a960) Create stream\nI1027 11:38:17.586316 2298 log.go:181] (0xc000750fd0) (0xc000b2a960) Stream added, broadcasting: 1\nI1027 11:38:17.591428 2298 log.go:181] (0xc000750fd0) Reply frame received for 1\nI1027 11:38:17.591482 2298 log.go:181] (0xc000750fd0) (0xc000d960a0) Create stream\nI1027 11:38:17.591497 2298 log.go:181] (0xc000750fd0) (0xc000d960a0) Stream added, broadcasting: 3\nI1027 11:38:17.592527 2298 log.go:181] (0xc000750fd0) Reply frame received for 3\nI1027 11:38:17.592564 2298 log.go:181] (0xc000750fd0) (0xc000d96140) Create stream\nI1027 11:38:17.592581 2298 log.go:181] (0xc000750fd0) (0xc000d96140) Stream added, broadcasting: 5\nI1027 11:38:17.593495 2298 log.go:181] (0xc000750fd0) Reply frame received for 5\nI1027 11:38:17.659606 2298 log.go:181] (0xc000750fd0) Data frame received for 5\nI1027 11:38:17.659657 2298 log.go:181] (0xc000d96140) (5) Data frame handling\nI1027 11:38:17.659729 2298 log.go:181] (0xc000d96140) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 32742\nConnection to 172.18.0.12 32742 port [tcp/32742] succeeded!\nI1027 11:38:17.659821 2298 log.go:181] (0xc000750fd0) Data frame received for 5\nI1027 11:38:17.659845 2298 log.go:181] (0xc000d96140) (5) Data frame handling\nI1027 11:38:17.660039 2298 log.go:181] (0xc000750fd0) Data frame received for 3\nI1027 11:38:17.660074 2298 log.go:181] (0xc000d960a0) (3) Data frame handling\nI1027 11:38:17.661695 2298 log.go:181] (0xc000750fd0) Data frame received for 1\nI1027 11:38:17.661731 2298 log.go:181] (0xc000b2a960) (1) Data frame handling\nI1027 11:38:17.661747 2298 log.go:181] (0xc000b2a960) (1) Data frame sent\nI1027 11:38:17.661769 2298 log.go:181] (0xc000750fd0) (0xc000b2a960) Stream removed, broadcasting: 1\nI1027 11:38:17.661790 2298 log.go:181] (0xc000750fd0) Go away received\nI1027 11:38:17.662354 2298 log.go:181] (0xc000750fd0) (0xc000b2a960) Stream removed, broadcasting: 1\nI1027 11:38:17.662376 2298 log.go:181] (0xc000750fd0) (0xc000d960a0) Stream removed, broadcasting: 3\nI1027 11:38:17.662388 2298 log.go:181] (0xc000750fd0) (0xc000d96140) Stream removed, broadcasting: 5\n" Oct 27 11:38:17.672: INFO: stdout: "" Oct 27 11:38:17.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5088 execpod-affinity529ls -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32742' Oct 27 11:38:17.878: INFO: stderr: "I1027 11:38:17.794719 2316 log.go:181] (0xc000bb7760) (0xc000b54960) Create stream\nI1027 11:38:17.794774 2316 log.go:181] (0xc000bb7760) (0xc000b54960) Stream added, broadcasting: 1\nI1027 11:38:17.797343 2316 log.go:181] (0xc000bb7760) Reply frame received for 1\nI1027 11:38:17.797369 2316 log.go:181] (0xc000bb7760) (0xc0004ce3c0) Create stream\nI1027 11:38:17.797379 2316 log.go:181] (0xc000bb7760) (0xc0004ce3c0) Stream added, broadcasting: 3\nI1027 11:38:17.798256 2316 log.go:181] (0xc000bb7760) Reply frame received for 3\nI1027 11:38:17.798294 2316 log.go:181] (0xc000bb7760) (0xc0004ce460) Create stream\nI1027 11:38:17.798307 2316 log.go:181] (0xc000bb7760) (0xc0004ce460) Stream added, broadcasting: 
5\nI1027 11:38:17.799286 2316 log.go:181] (0xc000bb7760) Reply frame received for 5\nI1027 11:38:17.868364 2316 log.go:181] (0xc000bb7760) Data frame received for 3\nI1027 11:38:17.868431 2316 log.go:181] (0xc000bb7760) Data frame received for 5\nI1027 11:38:17.868486 2316 log.go:181] (0xc0004ce460) (5) Data frame handling\nI1027 11:38:17.868514 2316 log.go:181] (0xc0004ce460) (5) Data frame sent\nI1027 11:38:17.868533 2316 log.go:181] (0xc000bb7760) Data frame received for 5\nI1027 11:38:17.868551 2316 log.go:181] (0xc0004ce460) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32742\nConnection to 172.18.0.13 32742 port [tcp/32742] succeeded!\nI1027 11:38:17.868576 2316 log.go:181] (0xc0004ce3c0) (3) Data frame handling\nI1027 11:38:17.870376 2316 log.go:181] (0xc000bb7760) Data frame received for 1\nI1027 11:38:17.870427 2316 log.go:181] (0xc000b54960) (1) Data frame handling\nI1027 11:38:17.870452 2316 log.go:181] (0xc000b54960) (1) Data frame sent\nI1027 11:38:17.870480 2316 log.go:181] (0xc000bb7760) (0xc000b54960) Stream removed, broadcasting: 1\nI1027 11:38:17.870538 2316 log.go:181] (0xc000bb7760) Go away received\nI1027 11:38:17.871040 2316 log.go:181] (0xc000bb7760) (0xc000b54960) Stream removed, broadcasting: 1\nI1027 11:38:17.871065 2316 log.go:181] (0xc000bb7760) (0xc0004ce3c0) Stream removed, broadcasting: 3\nI1027 11:38:17.871078 2316 log.go:181] (0xc000bb7760) (0xc0004ce460) Stream removed, broadcasting: 5\n" Oct 27 11:38:17.878: INFO: stdout: "" Oct 27 11:38:17.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5088 execpod-affinity529ls -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32742/ ; done' Oct 27 11:38:18.178: INFO: stderr: "I1027 11:38:18.009852 2333 log.go:181] (0xc00003ad10) (0xc00017ea00) Create stream\nI1027 11:38:18.009894 2333 log.go:181] (0xc00003ad10) (0xc00017ea00) Stream added, broadcasting: 1\nI1027 11:38:18.011870 2333 log.go:181] (0xc00003ad10) Reply frame received for 1\nI1027 11:38:18.011908 2333 log.go:181] (0xc00003ad10) (0xc00036e140) Create stream\nI1027 11:38:18.011929 2333 log.go:181] (0xc00003ad10) (0xc00036e140) Stream added, broadcasting: 3\nI1027 11:38:18.012541 2333 log.go:181] (0xc00003ad10) Reply frame received for 3\nI1027 11:38:18.012567 2333 log.go:181] (0xc00003ad10) (0xc0003bc5a0) Create stream\nI1027 11:38:18.012575 2333 log.go:181] (0xc00003ad10) (0xc0003bc5a0) Stream added, broadcasting: 5\nI1027 11:38:18.013273 2333 log.go:181] (0xc00003ad10) Reply frame received for 5\nI1027 11:38:18.083555 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.083598 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.083612 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.083634 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.083643 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.083654 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.088638 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.088659 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.088678 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.089036 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.089126 2333 log.go:181] (0xc00036e140) (3) Data frame 
handling\nI1027 11:38:18.089142 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.089155 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.089160 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.089165 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.094033 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.094055 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.094073 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.094453 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.094465 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.094471 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.094480 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.094485 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.094489 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.098551 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.098566 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.098578 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.099155 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.099190 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.099227 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.099273 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.099291 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.099308 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.104714 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.104739 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.104760 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.105787 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.105806 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.105817 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.105833 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.105841 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.105849 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.109189 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.109203 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.109208 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.109611 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.109624 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.109630 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\nI1027 11:38:18.109639 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.109644 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.109652 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.109721 2333 log.go:181] (0xc00036e140) (3) Data frame 
handling\nI1027 11:38:18.109749 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.109778 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\nI1027 11:38:18.116580 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.116606 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.116622 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.117112 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.117133 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.117145 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.117217 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.117238 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.117253 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.120376 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.120388 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.120402 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.120663 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.120692 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.120714 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.120740 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.120751 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.120764 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\nI1027 11:38:18.127409 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.127422 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.127431 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.128014 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.128037 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.128050 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.128076 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.128087 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.128099 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.131414 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.131430 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.131444 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.131976 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.132004 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.132015 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.132030 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.132037 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.132044 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.135901 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.135919 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.135928 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.136461 2333 log.go:181] (0xc00003ad10) Data frame 
received for 3\nI1027 11:38:18.136492 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.136506 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.136522 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.136531 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.136542 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.141680 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.141702 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.141725 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.142292 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.142316 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.142357 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.142376 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.142395 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.142406 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.146497 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.146523 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.146549 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.147419 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.147437 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.147448 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.147466 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.147479 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.147489 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.152580 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.152594 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.152612 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.153150 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.153167 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.153182 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.153201 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.153224 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\nI1027 11:38:18.153238 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.157890 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.157905 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.157918 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.158398 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.158436 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.158453 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.158479 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.158497 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.158512 2333 log.go:181] (0xc00036e140) (3) Data 
frame sent\nI1027 11:38:18.162780 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.162804 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.162824 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.163592 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.163633 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.163647 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.163665 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.163676 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.163686 2333 log.go:181] (0xc0003bc5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32742/\nI1027 11:38:18.169008 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.169035 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.169054 2333 log.go:181] (0xc00036e140) (3) Data frame sent\nI1027 11:38:18.169696 2333 log.go:181] (0xc00003ad10) Data frame received for 5\nI1027 11:38:18.169725 2333 log.go:181] (0xc0003bc5a0) (5) Data frame handling\nI1027 11:38:18.169777 2333 log.go:181] (0xc00003ad10) Data frame received for 3\nI1027 11:38:18.169797 2333 log.go:181] (0xc00036e140) (3) Data frame handling\nI1027 11:38:18.171598 2333 log.go:181] (0xc00003ad10) Data frame received for 1\nI1027 11:38:18.171623 2333 log.go:181] (0xc00017ea00) (1) Data frame handling\nI1027 11:38:18.171650 2333 log.go:181] (0xc00017ea00) (1) Data frame sent\nI1027 11:38:18.171904 2333 log.go:181] (0xc00003ad10) (0xc00017ea00) Stream removed, broadcasting: 1\nI1027 11:38:18.171934 2333 log.go:181] (0xc00003ad10) Go away received\nI1027 11:38:18.172359 2333 log.go:181] (0xc00003ad10) (0xc00017ea00) Stream removed, broadcasting: 1\nI1027 11:38:18.172384 2333 log.go:181] (0xc00003ad10) (0xc00036e140) Stream removed, broadcasting: 3\nI1027 11:38:18.172398 2333 log.go:181] (0xc00003ad10) (0xc0003bc5a0) Stream removed, broadcasting: 5\n" Oct 27 11:38:18.179: INFO: stdout: "\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b\naffinity-nodeport-fds5b" Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 
11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Received response from host: affinity-nodeport-fds5b Oct 27 11:38:18.179: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-5088, will wait for the garbage collector to delete the pods Oct 27 11:38:18.334: INFO: Deleting ReplicationController affinity-nodeport took: 5.582457ms Oct 27 11:38:18.934: INFO: Terminating ReplicationController affinity-nodeport pods took: 600.201514ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:38:28.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5088" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.307 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":208,"skipped":3272,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:38:28.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:38:28.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f" in namespace "projected-4573" to be "Succeeded or Failed" Oct 27 11:38:28.901: INFO: Pod "downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.534665ms Oct 27 11:38:31.054: INFO: Pod "downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.168671323s Oct 27 11:38:33.059: INFO: Pod "downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173888764s STEP: Saw pod success Oct 27 11:38:33.059: INFO: Pod "downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f" satisfied condition "Succeeded or Failed" Oct 27 11:38:33.063: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f container client-container: STEP: delete the pod Oct 27 11:38:33.158: INFO: Waiting for pod downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f to disappear Oct 27 11:38:33.164: INFO: Pod downwardapi-volume-7948603a-ce39-4188-804d-0152cfe65b1f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:38:33.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4573" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3287,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:38:33.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:38:33.307: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3" in namespace "projected-6681" to be "Succeeded or Failed" Oct 27 11:38:33.363: INFO: Pod "downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 55.354918ms Oct 27 11:38:35.366: INFO: Pod "downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058668491s Oct 27 11:38:37.401: INFO: Pod "downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.093497145s STEP: Saw pod success Oct 27 11:38:37.401: INFO: Pod "downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3" satisfied condition "Succeeded or Failed" Oct 27 11:38:37.403: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3 container client-container: STEP: delete the pod Oct 27 11:38:37.445: INFO: Waiting for pod downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3 to disappear Oct 27 11:38:37.449: INFO: Pod downwardapi-volume-b5fdd0b3-bf3c-40f0-8455-638736bc1ec3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:38:37.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6681" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:38:37.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-8731c90a-cff1-47ff-b516-630e1a312098 in namespace container-probe-6487 Oct 27 11:38:41.579: INFO: Started pod liveness-8731c90a-cff1-47ff-b516-630e1a312098 in namespace container-probe-6487 STEP: checking the pod's current state and verifying that restartCount is present Oct 27 11:38:41.581: INFO: Initial restart count of pod liveness-8731c90a-cff1-47ff-b516-630e1a312098 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:42:42.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6487" for this suite. 
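The probe being exercised here is a plain TCP connect check: the kubelet opens a socket to the named container port and, as long as the connection succeeds, never restarts the container. A minimal sketch of that shape, assuming illustrative names and an nginx image listening on port 80 (the conformance test uses its own generated pod and probes tcp:8080):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tcp-liveness-example        # illustrative name, not the e2e-generated pod
spec:
  containers:
  - name: web
    image: nginx                    # any image that listens on the probed port
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80                    # the conformance case probes tcp:8080 against its own test image
      initialDelaySeconds: 5
      periodSeconds: 10
EOF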
• [SLOW TEST:245.010 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:42:42.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:42:42.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b" in namespace "downward-api-3386" to be "Succeeded or Failed" Oct 27 11:42:42.896: INFO: Pod "downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 129.429498ms Oct 27 11:42:44.900: INFO: Pod "downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133911022s Oct 27 11:42:46.905: INFO: Pod "downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138759084s STEP: Saw pod success Oct 27 11:42:46.905: INFO: Pod "downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b" satisfied condition "Succeeded or Failed" Oct 27 11:42:46.908: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b container client-container: STEP: delete the pod Oct 27 11:42:46.951: INFO: Waiting for pod downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b to disappear Oct 27 11:42:46.958: INFO: Pod downwardapi-volume-10188b93-d985-4cf3-8813-5b692dd20c8b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:42:46.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3386" for this suite. 
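The downward API volume cases above all follow the same pattern: pod or container metadata is projected into a volume as files, and the test container reads the file back and prints it. A hedged sketch of a pod that exposes its own CPU limit this way (names, image, and limits below are illustrative, not what the framework generates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # prints the projected limit
    resources:
      limits:
        cpu: "500m"
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu            # limits.memory is used by the memory-limit case
EOF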
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":212,"skipped":3382,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:42:46.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ed149fa5-6a3d-4208-96ae-f83406ec48b7 STEP: Creating a pod to test consume configMaps Oct 27 11:42:47.055: INFO: Waiting up to 5m0s for pod "pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8" in namespace "configmap-1457" to be "Succeeded or Failed" Oct 27 11:42:47.090: INFO: Pod "pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.065021ms Oct 27 11:42:49.243: INFO: Pod "pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187511531s Oct 27 11:42:51.248: INFO: Pod "pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.192160516s STEP: Saw pod success Oct 27 11:42:51.248: INFO: Pod "pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8" satisfied condition "Succeeded or Failed" Oct 27 11:42:51.251: INFO: Trying to get logs from node kali-worker pod pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8 container configmap-volume-test: STEP: delete the pod Oct 27 11:42:51.507: INFO: Waiting for pod pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8 to disappear Oct 27 11:42:51.539: INFO: Pod pod-configmaps-00e87dd6-e110-48fe-bede-43e52e0f29e8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:42:51.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1457" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":213,"skipped":3394,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:42:51.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 27 11:42:51.729: INFO: Waiting up to 5m0s for pod "pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3" in namespace "emptydir-9616" to be "Succeeded or Failed" Oct 27 11:42:51.744: INFO: Pod "pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.719264ms Oct 27 11:42:53.747: INFO: Pod "pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018405918s Oct 27 11:42:55.752: INFO: Pod "pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023117638s STEP: Saw pod success Oct 27 11:42:55.752: INFO: Pod "pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3" satisfied condition "Succeeded or Failed" Oct 27 11:42:55.756: INFO: Trying to get logs from node kali-worker pod pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3 container test-container: STEP: delete the pod Oct 27 11:42:55.783: INFO: Waiting for pod pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3 to disappear Oct 27 11:42:55.790: INFO: Pod pod-dfe4cdb9-7cb8-4794-86b9-6dbedfca85c3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:42:55.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9616" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":214,"skipped":3399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:42:55.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:43:06.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4510" for this suite. • [SLOW TEST:11.172 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":303,"completed":215,"skipped":3465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:43:06.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Oct 27 11:43:07.055: INFO: Waiting up to 5m0s for pod "pod-90c9cb6e-fd46-4532-82d7-259fa846e788" in namespace "emptydir-2314" to be "Succeeded or Failed" Oct 27 11:43:07.066: INFO: Pod "pod-90c9cb6e-fd46-4532-82d7-259fa846e788": Phase="Pending", Reason="", readiness=false. Elapsed: 11.642842ms Oct 27 11:43:09.111: INFO: Pod "pod-90c9cb6e-fd46-4532-82d7-259fa846e788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05615462s Oct 27 11:43:11.116: INFO: Pod "pod-90c9cb6e-fd46-4532-82d7-259fa846e788": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061011179s STEP: Saw pod success Oct 27 11:43:11.116: INFO: Pod "pod-90c9cb6e-fd46-4532-82d7-259fa846e788" satisfied condition "Succeeded or Failed" Oct 27 11:43:11.119: INFO: Trying to get logs from node kali-worker pod pod-90c9cb6e-fd46-4532-82d7-259fa846e788 container test-container: STEP: delete the pod Oct 27 11:43:11.162: INFO: Waiting for pod pod-90c9cb6e-fd46-4532-82d7-259fa846e788 to disappear Oct 27 11:43:11.194: INFO: Pod pod-90c9cb6e-fd46-4532-82d7-259fa846e788 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:43:11.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2314" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3491,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:43:11.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Oct 27 11:43:11.267: INFO: Waiting up to 5m0s for pod "var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67" in namespace "var-expansion-7002" to be "Succeeded or Failed" Oct 27 11:43:11.270: INFO: Pod "var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349642ms Oct 27 11:43:13.277: INFO: Pod "var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010160222s Oct 27 11:43:15.282: INFO: Pod "var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015157528s STEP: Saw pod success Oct 27 11:43:15.282: INFO: Pod "var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67" satisfied condition "Succeeded or Failed" Oct 27 11:43:15.285: INFO: Trying to get logs from node kali-worker pod var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67 container dapi-container: STEP: delete the pod Oct 27 11:43:15.312: INFO: Waiting for pod var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67 to disappear Oct 27 11:43:15.318: INFO: Pod var-expansion-0130c603-1f4a-47bc-9b9a-4afe05f48a67 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:43:15.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7002" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3494,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:43:15.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2489 STEP: creating service affinity-clusterip-transition in namespace services-2489 STEP: creating replication controller affinity-clusterip-transition in namespace services-2489 I1027 11:43:15.506690 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2489, replica count: 3 I1027 11:43:18.557186 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:43:21.557394 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:43:21.572: INFO: Creating new exec pod Oct 27 11:43:26.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2489 execpod-affinityf744w -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Oct 27 11:43:26.823: INFO: stderr: "I1027 11:43:26.736443 2351 log.go:181] (0xc00003ba20) (0xc000c06aa0) Create stream\nI1027 11:43:26.736521 2351 log.go:181] (0xc00003ba20) (0xc000c06aa0) Stream added, broadcasting: 1\nI1027 11:43:26.739738 2351 log.go:181] (0xc00003ba20) Reply frame received for 1\nI1027 11:43:26.739825 2351 log.go:181] (0xc00003ba20) (0xc0009ac280) Create stream\nI1027 11:43:26.739857 2351 log.go:181] (0xc00003ba20) (0xc0009ac280) Stream added, broadcasting: 3\nI1027 11:43:26.741563 2351 log.go:181] (0xc00003ba20) Reply frame received for 3\nI1027 11:43:26.741599 2351 log.go:181] (0xc00003ba20) (0xc000b7e000) Create stream\nI1027 11:43:26.741607 2351 log.go:181] (0xc00003ba20) (0xc000b7e000) Stream added, broadcasting: 5\nI1027 11:43:26.742665 2351 log.go:181] (0xc00003ba20) Reply frame received for 5\nI1027 11:43:26.813543 2351 log.go:181] (0xc00003ba20) Data frame received for 5\nI1027 11:43:26.813572 2351 log.go:181] (0xc000b7e000) (5) Data frame handling\nI1027 11:43:26.813588 2351 log.go:181] (0xc000b7e000) (5) Data frame sent\nI1027 11:43:26.813597 2351 log.go:181] (0xc00003ba20) Data frame received 
for 5\nI1027 11:43:26.813603 2351 log.go:181] (0xc000b7e000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI1027 11:43:26.813665 2351 log.go:181] (0xc00003ba20) Data frame received for 3\nI1027 11:43:26.813693 2351 log.go:181] (0xc0009ac280) (3) Data frame handling\nI1027 11:43:26.815434 2351 log.go:181] (0xc00003ba20) Data frame received for 1\nI1027 11:43:26.815447 2351 log.go:181] (0xc000c06aa0) (1) Data frame handling\nI1027 11:43:26.815457 2351 log.go:181] (0xc000c06aa0) (1) Data frame sent\nI1027 11:43:26.815688 2351 log.go:181] (0xc00003ba20) (0xc000c06aa0) Stream removed, broadcasting: 1\nI1027 11:43:26.815729 2351 log.go:181] (0xc00003ba20) Go away received\nI1027 11:43:26.816241 2351 log.go:181] (0xc00003ba20) (0xc000c06aa0) Stream removed, broadcasting: 1\nI1027 11:43:26.816261 2351 log.go:181] (0xc00003ba20) (0xc0009ac280) Stream removed, broadcasting: 3\nI1027 11:43:26.816271 2351 log.go:181] (0xc00003ba20) (0xc000b7e000) Stream removed, broadcasting: 5\n" Oct 27 11:43:26.824: INFO: stdout: "" Oct 27 11:43:26.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2489 execpod-affinityf744w -- /bin/sh -x -c nc -zv -t -w 2 10.97.71.174 80' Oct 27 11:43:27.037: INFO: stderr: "I1027 11:43:26.963483 2369 log.go:181] (0xc00003a420) (0xc000e12000) Create stream\nI1027 11:43:26.963532 2369 log.go:181] (0xc00003a420) (0xc000e12000) Stream added, broadcasting: 1\nI1027 11:43:26.964822 2369 log.go:181] (0xc00003a420) Reply frame received for 1\nI1027 11:43:26.964921 2369 log.go:181] (0xc00003a420) (0xc000e120a0) Create stream\nI1027 11:43:26.964929 2369 log.go:181] (0xc00003a420) (0xc000e120a0) Stream added, broadcasting: 3\nI1027 11:43:26.965610 2369 log.go:181] (0xc00003a420) Reply frame received for 3\nI1027 11:43:26.965634 2369 log.go:181] (0xc00003a420) (0xc000889a40) Create stream\nI1027 11:43:26.965647 2369 log.go:181] (0xc00003a420) (0xc000889a40) Stream added, broadcasting: 5\nI1027 11:43:26.966435 2369 log.go:181] (0xc00003a420) Reply frame received for 5\nI1027 11:43:27.029859 2369 log.go:181] (0xc00003a420) Data frame received for 3\nI1027 11:43:27.029903 2369 log.go:181] (0xc000e120a0) (3) Data frame handling\nI1027 11:43:27.029923 2369 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:43:27.029934 2369 log.go:181] (0xc000889a40) (5) Data frame handling\nI1027 11:43:27.029944 2369 log.go:181] (0xc000889a40) (5) Data frame sent\nI1027 11:43:27.029951 2369 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:43:27.029957 2369 log.go:181] (0xc000889a40) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.71.174 80\nConnection to 10.97.71.174 80 port [tcp/http] succeeded!\nI1027 11:43:27.031522 2369 log.go:181] (0xc00003a420) Data frame received for 1\nI1027 11:43:27.031602 2369 log.go:181] (0xc000e12000) (1) Data frame handling\nI1027 11:43:27.031667 2369 log.go:181] (0xc000e12000) (1) Data frame sent\nI1027 11:43:27.031697 2369 log.go:181] (0xc00003a420) (0xc000e12000) Stream removed, broadcasting: 1\nI1027 11:43:27.031906 2369 log.go:181] (0xc00003a420) Go away received\nI1027 11:43:27.032100 2369 log.go:181] (0xc00003a420) (0xc000e12000) Stream removed, broadcasting: 1\nI1027 11:43:27.032120 2369 log.go:181] (0xc00003a420) (0xc000e120a0) Stream removed, broadcasting: 3\nI1027 11:43:27.032133 2369 log.go:181] (0xc00003a420) (0xc000889a40) Stream removed, broadcasting: 5\n" 
Oct 27 11:43:27.037: INFO: stdout: "" Oct 27 11:43:27.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2489 execpod-affinityf744w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.71.174:80/ ; done' Oct 27 11:43:27.389: INFO: stderr: "I1027 11:43:27.194221 2387 log.go:181] (0xc0005b76b0) (0xc000c0c780) Create stream\nI1027 11:43:27.194289 2387 log.go:181] (0xc0005b76b0) (0xc000c0c780) Stream added, broadcasting: 1\nI1027 11:43:27.199309 2387 log.go:181] (0xc0005b76b0) Reply frame received for 1\nI1027 11:43:27.199371 2387 log.go:181] (0xc0005b76b0) (0xc000b8e0a0) Create stream\nI1027 11:43:27.199393 2387 log.go:181] (0xc0005b76b0) (0xc000b8e0a0) Stream added, broadcasting: 3\nI1027 11:43:27.200459 2387 log.go:181] (0xc0005b76b0) Reply frame received for 3\nI1027 11:43:27.200506 2387 log.go:181] (0xc0005b76b0) (0xc000b8e140) Create stream\nI1027 11:43:27.200528 2387 log.go:181] (0xc0005b76b0) (0xc000b8e140) Stream added, broadcasting: 5\nI1027 11:43:27.201571 2387 log.go:181] (0xc0005b76b0) Reply frame received for 5\nI1027 11:43:27.272039 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.272091 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.272117 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.272164 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.272187 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.272205 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.279719 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.279743 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.279763 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.280657 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.280687 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.280696 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.280737 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.280770 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.280817 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.286128 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.286152 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.286173 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.286878 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.286918 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.286938 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.286963 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.286981 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.287005 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.296080 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.296104 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.296124 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.297195 2387 log.go:181] (0xc0005b76b0) Data frame 
received for 3\nI1027 11:43:27.297214 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.297234 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.297268 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.297284 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.297303 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.301552 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.301566 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.301573 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.302524 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.302545 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.302553 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.302578 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.302599 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.302616 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.309662 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.309679 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.309692 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.310456 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.310477 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.310489 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.310498 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.310505 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.310514 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.316208 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.316243 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.316275 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.317019 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.317065 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.317085 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.317105 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.317119 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.317144 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.323324 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.323344 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.323370 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.324018 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.324031 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.324037 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.324064 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.324081 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.324101 2387 log.go:181] (0xc000b8e0a0) (3) Data frame 
sent\nI1027 11:43:27.331091 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.331118 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.331139 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.331751 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.331783 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.331813 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.331826 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.331840 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.331849 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\nI1027 11:43:27.331858 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.331868 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.331890 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\nI1027 11:43:27.335412 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.335430 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.335446 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.335866 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.335903 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.335916 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.335929 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.335935 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.335942 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.339776 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.339796 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.339806 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.340758 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.340791 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.340815 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.340830 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.340930 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.340946 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.346443 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.346483 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.346511 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.347914 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.347938 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.347953 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.348015 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.348040 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.348058 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.354055 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.354073 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.354085 2387 log.go:181] (0xc000b8e0a0) (3) Data frame 
sent\nI1027 11:43:27.354736 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.354765 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.354788 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.354817 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.354828 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.354847 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.359129 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.359145 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.359158 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.359575 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.359592 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.359622 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.359651 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.359665 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.359685 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.365463 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.365485 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.365502 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.366022 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.366035 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.366051 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.366084 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.366097 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.366112 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.371800 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.371819 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.371834 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.372427 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.372460 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.372470 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.372495 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.372524 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.372551 2387 log.go:181] (0xc000b8e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.377876 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.377907 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.377938 2387 log.go:181] (0xc000b8e0a0) (3) Data frame sent\nI1027 11:43:27.378734 2387 log.go:181] (0xc0005b76b0) Data frame received for 3\nI1027 11:43:27.378768 2387 log.go:181] (0xc000b8e0a0) (3) Data frame handling\nI1027 11:43:27.378809 2387 log.go:181] (0xc0005b76b0) Data frame received for 5\nI1027 11:43:27.378841 2387 log.go:181] (0xc000b8e140) (5) Data frame handling\nI1027 11:43:27.380594 2387 log.go:181] (0xc0005b76b0) Data frame received for 1\nI1027 11:43:27.380636 2387 log.go:181] (0xc000c0c780) (1) Data 
frame handling\nI1027 11:43:27.380669 2387 log.go:181] (0xc000c0c780) (1) Data frame sent\nI1027 11:43:27.380719 2387 log.go:181] (0xc0005b76b0) (0xc000c0c780) Stream removed, broadcasting: 1\nI1027 11:43:27.380758 2387 log.go:181] (0xc0005b76b0) Go away received\nI1027 11:43:27.381322 2387 log.go:181] (0xc0005b76b0) (0xc000c0c780) Stream removed, broadcasting: 1\nI1027 11:43:27.381353 2387 log.go:181] (0xc0005b76b0) (0xc000b8e0a0) Stream removed, broadcasting: 3\nI1027 11:43:27.381365 2387 log.go:181] (0xc0005b76b0) (0xc000b8e140) Stream removed, broadcasting: 5\n" Oct 27 11:43:27.390: INFO: stdout: "\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-8v5sv\naffinity-clusterip-transition-wprbk\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-wprbk" Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-8v5sv Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-wprbk Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.390: INFO: Received response from host: affinity-clusterip-transition-wprbk Oct 27 11:43:27.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2489 execpod-affinityf744w -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.71.174:80/ ; done' Oct 27 11:43:27.716: INFO: stderr: "I1027 11:43:27.548803 2405 log.go:181] (0xc0009b8000) (0xc0009e4000) Create stream\nI1027 11:43:27.548959 2405 log.go:181] (0xc0009b8000) (0xc0009e4000) Stream added, broadcasting: 1\nI1027 11:43:27.550681 2405 log.go:181] (0xc0009b8000) Reply frame received for 1\nI1027 11:43:27.550713 2405 log.go:181] (0xc0009b8000) (0xc000742140) Create stream\nI1027 11:43:27.550722 2405 log.go:181] (0xc0009b8000) (0xc000742140) Stream added, broadcasting: 3\nI1027 11:43:27.551334 2405 log.go:181] (0xc0009b8000) Reply frame received for 3\nI1027 
11:43:27.551364 2405 log.go:181] (0xc0009b8000) (0xc000743220) Create stream\nI1027 11:43:27.551388 2405 log.go:181] (0xc0009b8000) (0xc000743220) Stream added, broadcasting: 5\nI1027 11:43:27.552117 2405 log.go:181] (0xc0009b8000) Reply frame received for 5\nI1027 11:43:27.614532 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.614559 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.614567 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.614578 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.614582 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.614588 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.615199 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.615217 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.615229 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.615559 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.615594 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.615610 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.615620 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.615631 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.615637 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.619238 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.619260 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.619274 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.619620 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.619646 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.619657 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.619671 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.619685 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.619692 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.623360 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.623382 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.623396 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.624032 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.624049 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.624057 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.624122 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.624133 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.624144 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.629402 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.629416 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.629430 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.629933 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.629956 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.629976 2405 log.go:181] 
(0xc000743220) (5) Data frame sent\nI1027 11:43:27.629988 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.629995 2405 log.go:181] (0xc000743220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.630017 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.630036 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.630058 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.630071 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.634745 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.634759 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.634769 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.635374 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.635395 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.635424 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.635437 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.635445 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.635453 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.641990 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.642009 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.642028 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.642944 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.642984 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.643004 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.643029 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.643039 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.643056 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.649179 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.649203 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.649220 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.650078 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.650101 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.650123 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.650140 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.650152 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.650161 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.656573 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.656609 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.656638 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.657366 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.657392 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.657413 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.657427 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.657445 2405 log.go:181] (0xc000743220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 
2 http://10.97.71.174:80/\nI1027 11:43:27.657474 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.657569 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.657583 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.657596 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.664171 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.664205 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.664238 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.665093 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.665117 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.665164 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.665209 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.665221 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.665233 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.672820 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.672993 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.673030 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.673755 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.673777 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.673784 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.673793 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.673798 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.673803 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.678193 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.678212 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.678226 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.678698 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.678709 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.678722 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.678729 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.678733 2405 log.go:181] (0xc000743220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.678746 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.678820 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.678832 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.678841 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.686088 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.686167 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.686189 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.686217 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.686243 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.686258 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.686267 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.686279 2405 log.go:181] 
(0xc000742140) (3) Data frame handling\nI1027 11:43:27.686307 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.691159 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.691175 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.691183 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.691723 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.691764 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.691781 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.691797 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.691807 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.691815 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.696393 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.696410 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.696427 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.697033 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.697087 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.697111 2405 log.go:181] (0xc000743220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/I1027 11:43:27.697193 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.697217 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.697236 2405 log.go:181] (0xc000743220) (5) Data frame sent\n\nI1027 11:43:27.697608 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.697623 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.697631 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.701864 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.701880 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.701890 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.702463 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.702475 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.702481 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.702488 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.702494 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.702500 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.702505 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.702510 2405 log.go:181] (0xc000743220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.71.174:80/\nI1027 11:43:27.702521 2405 log.go:181] (0xc000743220) (5) Data frame sent\nI1027 11:43:27.707669 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.707690 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.707705 2405 log.go:181] (0xc000742140) (3) Data frame sent\nI1027 11:43:27.708265 2405 log.go:181] (0xc0009b8000) Data frame received for 5\nI1027 11:43:27.708287 2405 log.go:181] (0xc000743220) (5) Data frame handling\nI1027 11:43:27.708310 2405 log.go:181] (0xc0009b8000) Data frame received for 3\nI1027 11:43:27.708329 2405 log.go:181] (0xc000742140) (3) Data frame handling\nI1027 11:43:27.710210 2405 log.go:181] (0xc0009b8000) Data frame received for 1\nI1027 
11:43:27.710242 2405 log.go:181] (0xc0009e4000) (1) Data frame handling\nI1027 11:43:27.710271 2405 log.go:181] (0xc0009e4000) (1) Data frame sent\nI1027 11:43:27.710335 2405 log.go:181] (0xc0009b8000) (0xc0009e4000) Stream removed, broadcasting: 1\nI1027 11:43:27.710368 2405 log.go:181] (0xc0009b8000) Go away received\nI1027 11:43:27.710765 2405 log.go:181] (0xc0009b8000) (0xc0009e4000) Stream removed, broadcasting: 1\nI1027 11:43:27.710781 2405 log.go:181] (0xc0009b8000) (0xc000742140) Stream removed, broadcasting: 3\nI1027 11:43:27.710787 2405 log.go:181] (0xc0009b8000) (0xc000743220) Stream removed, broadcasting: 5\n" Oct 27 11:43:27.717: INFO: stdout: "\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm\naffinity-clusterip-transition-z94gm" Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Received response from host: affinity-clusterip-transition-z94gm Oct 27 11:43:27.717: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2489, will wait for the garbage collector to delete the pods Oct 27 11:43:27.918: INFO: Deleting ReplicationController affinity-clusterip-transition took: 110.267925ms Oct 27 11:43:28.418: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.250368ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:43:38.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2489" for this suite. 
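The two curl loops above show the effect of toggling the Service's session affinity: with affinity off, the sixteen requests spread across several backends, while after the switch to ClientIP every response comes from the same pod (affinity-clusterip-transition-z94gm). A hedged sketch of how that toggle could be reproduced with kubectl, reusing the service, namespace, and ClusterIP from the log; the patch command is an illustration, not the exact call the test framework makes:

# Enable ClientIP session affinity on the service, then re-run the request loop.
kubectl --namespace=services-2489 patch service affinity-clusterip-transition \
  --type=merge -p '{"spec":{"sessionAffinity":"ClientIP"}}'
kubectl --namespace=services-2489 exec execpod-affinityf744w -- \
  /bin/sh -c 'for i in $(seq 0 15); do echo; curl -s --connect-timeout 2 http://10.97.71.174:80/; done'
# With ClientIP affinity every line of output should name the same backend pod.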
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:22.950 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":218,"skipped":3510,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:43:38.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 27 11:43:38.391: INFO: Waiting up to 1m0s for all nodes to be ready Oct 27 11:44:38.416: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 27 11:44:38.473: INFO: Created pod: pod0-sched-preemption-low-priority Oct 27 11:44:38.581: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:45:00.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3770" for this suite. 
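The preemption test above relies on pod priority: two filler pods consume roughly two thirds of the node, and a pod carrying a higher (critical) priority evicts the low-priority one in order to be scheduled. A minimal sketch of the building blocks involved; the class name high-priority-demo and the pod below are illustrative, not the objects the framework creates:

# A custom PriorityClass and a pod that requests it (names are illustrative).
cat <<'EOF' | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-demo
value: 1000000
globalDefault: false
description: "Demo class; pods using it can preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
spec:
  priorityClassName: high-priority-demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
EOF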
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:82.442 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":219,"skipped":3512,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:45:00.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 27 11:45:07.557: INFO: Successfully updated pod "adopt-release-ck2dv" STEP: Checking that the Job readopts the Pod Oct 27 11:45:07.557: INFO: Waiting up to 15m0s for pod "adopt-release-ck2dv" in namespace "job-9402" to be "adopted" Oct 27 11:45:07.571: INFO: Pod "adopt-release-ck2dv": Phase="Running", Reason="", readiness=true. Elapsed: 14.202275ms Oct 27 11:45:09.576: INFO: Pod "adopt-release-ck2dv": Phase="Running", Reason="", readiness=true. Elapsed: 2.019176704s Oct 27 11:45:09.576: INFO: Pod "adopt-release-ck2dv" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 27 11:45:10.086: INFO: Successfully updated pod "adopt-release-ck2dv" STEP: Checking that the Job releases the Pod Oct 27 11:45:10.086: INFO: Waiting up to 15m0s for pod "adopt-release-ck2dv" in namespace "job-9402" to be "released" Oct 27 11:45:10.124: INFO: Pod "adopt-release-ck2dv": Phase="Running", Reason="", readiness=true. Elapsed: 38.287802ms Oct 27 11:45:12.149: INFO: Pod "adopt-release-ck2dv": Phase="Running", Reason="", readiness=true. Elapsed: 2.063570512s Oct 27 11:45:12.149: INFO: Pod "adopt-release-ck2dv" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:45:12.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9402" for this suite. 
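Adoption and release in the Job test above are driven by the pod's labels: while the pod matches the Job's selector the controller lists itself in the pod's ownerReferences, and once the labels are removed it drops that reference. A quick way to observe the same transition on the pod from the log (namespace job-9402, pod adopt-release-ck2dv); the label keys shown are the conventional ones the Job controller sets and are used here only as an illustration:

# Show which controller currently owns the pod (empty once the Job releases it).
kubectl --namespace=job-9402 get pod adopt-release-ck2dv \
  -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}{"\n"}'
# Removing the Job-managed labels makes the pod non-matching, so the Job releases it.
kubectl --namespace=job-9402 label pod adopt-release-ck2dv job-name- controller-uid-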
• [SLOW TEST:11.576 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":220,"skipped":3522,"failed":0} S ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:45:12.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Oct 27 11:45:12.388: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Oct 27 11:45:12.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2769' Oct 27 11:45:15.833: INFO: stderr: "" Oct 27 11:45:15.833: INFO: stdout: "service/agnhost-replica created\n" Oct 27 11:45:15.833: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Oct 27 11:45:15.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2769' Oct 27 11:45:16.516: INFO: stderr: "" Oct 27 11:45:16.516: INFO: stdout: "service/agnhost-primary created\n" Oct 27 11:45:16.517: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Oct 27 11:45:16.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2769' Oct 27 11:45:16.828: INFO: stderr: "" Oct 27 11:45:16.828: INFO: stdout: "service/frontend created\n" Oct 27 11:45:16.828: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Oct 27 11:45:16.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2769' Oct 27 11:45:17.112: INFO: stderr: "" Oct 27 11:45:17.112: INFO: stdout: "deployment.apps/frontend created\n" Oct 27 11:45:17.112: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 27 11:45:17.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2769' Oct 27 11:45:17.519: INFO: stderr: "" Oct 27 11:45:17.519: INFO: stdout: "deployment.apps/agnhost-primary created\n" Oct 27 11:45:17.520: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 27 11:45:17.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2769' Oct 27 11:45:17.948: INFO: stderr: "" Oct 27 11:45:17.948: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Oct 27 11:45:17.948: INFO: Waiting for all frontend pods to be Running. Oct 27 11:45:27.999: INFO: Waiting for frontend to serve content. Oct 27 11:45:28.009: INFO: Trying to add a new entry to the guestbook. Oct 27 11:45:28.019: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Oct 27 11:45:28.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2769' Oct 27 11:45:28.170: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:45:28.170: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Oct 27 11:45:28.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2769' Oct 27 11:45:28.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:45:28.318: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 27 11:45:28.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2769' Oct 27 11:45:28.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:45:28.461: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 27 11:45:28.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2769' Oct 27 11:45:28.567: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:45:28.567: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 27 11:45:28.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2769' Oct 27 11:45:28.671: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:45:28.671: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 27 11:45:28.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2769' Oct 27 11:45:29.275: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 27 11:45:29.275: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:45:29.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2769" for this suite. 
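The guestbook flow above is a plain create-validate-delete cycle: the manifests are piped to kubectl create, the test waits for the frontend pods to come up and serve content, and cleanup uses an immediate force delete. Roughly equivalent commands, assuming the same namespace kubectl-2769 and the manifests saved locally as guestbook.yaml (a hypothetical file name):

# Create everything, wait for the frontend rollout, then force-delete it all.
kubectl --namespace=kubectl-2769 create -f guestbook.yaml
kubectl --namespace=kubectl-2769 rollout status deployment/frontend --timeout=2m
kubectl --namespace=kubectl-2769 delete --grace-period=0 --force -f guestbook.yaml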
• [SLOW TEST:17.543 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":221,"skipped":3523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:45:29.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7620849b-a4a9-4cc6-aba2-857c6471c8d1 STEP: Creating a pod to test consume configMaps Oct 27 11:45:31.203: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083" in namespace "projected-7517" to be "Succeeded or Failed" Oct 27 11:45:31.266: INFO: Pod "pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083": Phase="Pending", Reason="", readiness=false. Elapsed: 62.8471ms Oct 27 11:45:33.270: INFO: Pod "pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066875387s Oct 27 11:45:35.317: INFO: Pod "pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113880832s Oct 27 11:45:37.323: INFO: Pod "pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.119705542s STEP: Saw pod success Oct 27 11:45:37.323: INFO: Pod "pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083" satisfied condition "Succeeded or Failed" Oct 27 11:45:37.326: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083 container projected-configmap-volume-test: STEP: delete the pod Oct 27 11:45:37.355: INFO: Waiting for pod pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083 to disappear Oct 27 11:45:37.359: INFO: Pod pod-projected-configmaps-0cd4c7eb-4d96-4beb-87e2-763360fe0083 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:45:37.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7517" for this suite. • [SLOW TEST:7.527 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3546,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:45:37.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 27 11:45:37.470: INFO: Waiting up to 5m0s for pod "pod-b537f808-31d5-4050-ab88-9d42d2a084a7" in namespace "emptydir-2292" to be "Succeeded or Failed" Oct 27 11:45:37.473: INFO: Pod "pod-b537f808-31d5-4050-ab88-9d42d2a084a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.737715ms Oct 27 11:45:39.478: INFO: Pod "pod-b537f808-31d5-4050-ab88-9d42d2a084a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007763665s Oct 27 11:45:41.482: INFO: Pod "pod-b537f808-31d5-4050-ab88-9d42d2a084a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011877847s STEP: Saw pod success Oct 27 11:45:41.482: INFO: Pod "pod-b537f808-31d5-4050-ab88-9d42d2a084a7" satisfied condition "Succeeded or Failed" Oct 27 11:45:41.485: INFO: Trying to get logs from node kali-worker pod pod-b537f808-31d5-4050-ab88-9d42d2a084a7 container test-container: STEP: delete the pod Oct 27 11:45:41.518: INFO: Waiting for pod pod-b537f808-31d5-4050-ab88-9d42d2a084a7 to disappear Oct 27 11:45:41.527: INFO: Pod pod-b537f808-31d5-4050-ab88-9d42d2a084a7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:45:41.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2292" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:45:41.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 27 11:45:41.612: INFO: Waiting up to 5m0s for pod "pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef" in namespace "emptydir-8136" to be "Succeeded or Failed" Oct 27 11:45:41.630: INFO: Pod "pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef": Phase="Pending", Reason="", readiness=false. Elapsed: 17.605488ms Oct 27 11:45:43.634: INFO: Pod "pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022318618s Oct 27 11:45:45.638: INFO: Pod "pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026187408s STEP: Saw pod success Oct 27 11:45:45.638: INFO: Pod "pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef" satisfied condition "Succeeded or Failed" Oct 27 11:45:45.641: INFO: Trying to get logs from node kali-worker2 pod pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef container test-container: STEP: delete the pod Oct 27 11:45:45.681: INFO: Waiting for pod pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef to disappear Oct 27 11:45:45.690: INFO: Pod pod-27f5b7fb-c61f-4fdc-aa2f-9aef29429eef no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:45:45.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8136" for this suite. 
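Both emptyDir cases above follow the same pattern: a pod mounts an emptyDir volume on the node's default medium, writes a file with the requested mode, and the framework reads the container log to confirm the permissions. A rough stand-alone equivalent; the busybox image, pod name, and file path are illustrative, while the real test uses the agnhost mounttest image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir: {}            # default medium, i.e. node-local disk
  containers:
  - name: test
    image: busybox:1.32
    command: ["/bin/sh", "-c",
              "touch /scratch/f && chmod 0777 /scratch/f && stat -c '%a %U' /scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
EOF
kubectl logs emptydir-mode-demo   # expect: 777 root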
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3611,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:45:45.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1683 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 27 11:45:45.801: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 27 11:45:45.860: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:45:47.864: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:45:50.025: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:45:51.865: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:45:53.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:45:55.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:45:57.863: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:45:59.864: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:46:01.865: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:46:03.865: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 27 11:46:03.871: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 27 11:46:05.876: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 27 11:46:09.905: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.171 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1683 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:46:09.905: INFO: >>> kubeConfig: /root/.kube/config I1027 11:46:09.932943 7 log.go:181] (0xc0005ee4d0) (0xc0043306e0) Create stream I1027 11:46:09.932969 7 log.go:181] (0xc0005ee4d0) (0xc0043306e0) Stream added, broadcasting: 1 I1027 11:46:09.937111 7 log.go:181] (0xc0005ee4d0) Reply frame received for 1 I1027 11:46:09.937156 7 log.go:181] (0xc0005ee4d0) (0xc000d9e000) Create stream I1027 11:46:09.937164 7 log.go:181] (0xc0005ee4d0) (0xc000d9e000) Stream added, broadcasting: 3 I1027 11:46:09.938139 7 log.go:181] (0xc0005ee4d0) Reply frame received for 3 I1027 11:46:09.938180 7 log.go:181] (0xc0005ee4d0) (0xc0010fe640) Create stream I1027 11:46:09.938197 7 log.go:181] (0xc0005ee4d0) (0xc0010fe640) Stream added, broadcasting: 5 I1027 
11:46:09.939171 7 log.go:181] (0xc0005ee4d0) Reply frame received for 5 I1027 11:46:11.009454 7 log.go:181] (0xc0005ee4d0) Data frame received for 3 I1027 11:46:11.009507 7 log.go:181] (0xc000d9e000) (3) Data frame handling I1027 11:46:11.009634 7 log.go:181] (0xc000d9e000) (3) Data frame sent I1027 11:46:11.009704 7 log.go:181] (0xc0005ee4d0) Data frame received for 3 I1027 11:46:11.009744 7 log.go:181] (0xc000d9e000) (3) Data frame handling I1027 11:46:11.009812 7 log.go:181] (0xc0005ee4d0) Data frame received for 5 I1027 11:46:11.009861 7 log.go:181] (0xc0010fe640) (5) Data frame handling I1027 11:46:11.011479 7 log.go:181] (0xc0005ee4d0) Data frame received for 1 I1027 11:46:11.011555 7 log.go:181] (0xc0043306e0) (1) Data frame handling I1027 11:46:11.011603 7 log.go:181] (0xc0043306e0) (1) Data frame sent I1027 11:46:11.011638 7 log.go:181] (0xc0005ee4d0) (0xc0043306e0) Stream removed, broadcasting: 1 I1027 11:46:11.011668 7 log.go:181] (0xc0005ee4d0) Go away received I1027 11:46:11.011786 7 log.go:181] (0xc0005ee4d0) (0xc0043306e0) Stream removed, broadcasting: 1 I1027 11:46:11.011806 7 log.go:181] (0xc0005ee4d0) (0xc000d9e000) Stream removed, broadcasting: 3 I1027 11:46:11.011815 7 log.go:181] (0xc0005ee4d0) (0xc0010fe640) Stream removed, broadcasting: 5 Oct 27 11:46:11.011: INFO: Found all expected endpoints: [netserver-0] Oct 27 11:46:11.015: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.120 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1683 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:46:11.015: INFO: >>> kubeConfig: /root/.kube/config I1027 11:46:11.044817 7 log.go:181] (0xc000e1c9a0) (0xc000d9f680) Create stream I1027 11:46:11.044935 7 log.go:181] (0xc000e1c9a0) (0xc000d9f680) Stream added, broadcasting: 1 I1027 11:46:11.047016 7 log.go:181] (0xc000e1c9a0) Reply frame received for 1 I1027 11:46:11.047035 7 log.go:181] (0xc000e1c9a0) (0xc000d9f7c0) Create stream I1027 11:46:11.047049 7 log.go:181] (0xc000e1c9a0) (0xc000d9f7c0) Stream added, broadcasting: 3 I1027 11:46:11.048006 7 log.go:181] (0xc000e1c9a0) Reply frame received for 3 I1027 11:46:11.048036 7 log.go:181] (0xc000e1c9a0) (0xc000533540) Create stream I1027 11:46:11.048052 7 log.go:181] (0xc000e1c9a0) (0xc000533540) Stream added, broadcasting: 5 I1027 11:46:11.049264 7 log.go:181] (0xc000e1c9a0) Reply frame received for 5 I1027 11:46:12.126044 7 log.go:181] (0xc000e1c9a0) Data frame received for 3 I1027 11:46:12.126078 7 log.go:181] (0xc000d9f7c0) (3) Data frame handling I1027 11:46:12.126099 7 log.go:181] (0xc000d9f7c0) (3) Data frame sent I1027 11:46:12.126306 7 log.go:181] (0xc000e1c9a0) Data frame received for 5 I1027 11:46:12.126344 7 log.go:181] (0xc000533540) (5) Data frame handling I1027 11:46:12.126683 7 log.go:181] (0xc000e1c9a0) Data frame received for 3 I1027 11:46:12.126713 7 log.go:181] (0xc000d9f7c0) (3) Data frame handling I1027 11:46:12.129406 7 log.go:181] (0xc000e1c9a0) Data frame received for 1 I1027 11:46:12.129438 7 log.go:181] (0xc000d9f680) (1) Data frame handling I1027 11:46:12.129472 7 log.go:181] (0xc000d9f680) (1) Data frame sent I1027 11:46:12.129543 7 log.go:181] (0xc000e1c9a0) (0xc000d9f680) Stream removed, broadcasting: 1 I1027 11:46:12.129612 7 log.go:181] (0xc000e1c9a0) Go away received I1027 11:46:12.129662 7 log.go:181] (0xc000e1c9a0) (0xc000d9f680) Stream removed, broadcasting: 1 I1027 11:46:12.129735 7 log.go:181] (0xc000e1c9a0) (0xc000d9f7c0) Stream 
removed, broadcasting: 3 I1027 11:46:12.129797 7 log.go:181] (0xc000e1c9a0) (0xc000533540) Stream removed, broadcasting: 5 Oct 27 11:46:12.129: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:12.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1683" for this suite. • [SLOW TEST:26.441 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":225,"skipped":3616,"failed":0} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:12.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Oct 27 11:46:12.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config api-versions' Oct 27 11:46:12.466: INFO: stderr: "" Oct 27 11:46:12.466: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] 
Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:12.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6060" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":226,"skipped":3616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:12.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2047 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2047 to expose endpoints map[] Oct 27 11:46:12.610: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Oct 27 11:46:13.619: INFO: successfully validated that service endpoint-test2 in namespace services-2047 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2047 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2047 to expose endpoints map[pod1:[80]] Oct 27 11:46:16.745: INFO: successfully validated that service endpoint-test2 in namespace services-2047 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2047 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2047 to expose endpoints map[pod1:[80] pod2:[80]] Oct 27 11:46:20.821: INFO: Unexpected endpoints: found map[ae8c6470-9b31-4e09-be5a-a1d4331f7810:[80]], expected map[pod1:[80] pod2:[80]], will retry Oct 27 11:46:21.825: INFO: successfully validated that service endpoint-test2 in namespace services-2047 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2047 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2047 to expose endpoints map[pod2:[80]] Oct 27 11:46:21.879: INFO: successfully validated that service endpoint-test2 in namespace services-2047 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2047 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2047 to expose endpoints map[] Oct 27 11:46:22.938: INFO: successfully validated that service endpoint-test2 in namespace services-2047 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:22.963: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2047" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.511 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":227,"skipped":3648,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:22.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:46:23.406: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-91f88e78-bc75-402e-9b09-a82f26366656" in namespace "security-context-test-1028" to be "Succeeded or Failed" Oct 27 11:46:23.409: INFO: Pod "busybox-privileged-false-91f88e78-bc75-402e-9b09-a82f26366656": Phase="Pending", Reason="", readiness=false. Elapsed: 3.160735ms Oct 27 11:46:25.419: INFO: Pod "busybox-privileged-false-91f88e78-bc75-402e-9b09-a82f26366656": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013437359s Oct 27 11:46:27.773: INFO: Pod "busybox-privileged-false-91f88e78-bc75-402e-9b09-a82f26366656": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.366969973s Oct 27 11:46:27.773: INFO: Pod "busybox-privileged-false-91f88e78-bc75-402e-9b09-a82f26366656" satisfied condition "Succeeded or Failed" Oct 27 11:46:27.786: INFO: Got logs for pod "busybox-privileged-false-91f88e78-bc75-402e-9b09-a82f26366656": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:27.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1028" for this suite. 
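The unprivileged check above comes down to a pod whose container sets securityContext.privileged: false and then attempts a privileged network operation; the "RTNETLINK answers: Operation not permitted" line in the log is that denial. A minimal sketch, assuming a busybox image and throwaway names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: privileged-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.31
    # Adding a network interface requires privileges, so this is expected to be denied.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
EOF
sleep 10 && kubectl logs privileged-false-demo   # expect: ip: RTNETLINK answers: Operation not permitted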
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":228,"skipped":3660,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:27.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 27 11:46:27.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-665' Oct 27 11:46:27.996: INFO: stderr: "" Oct 27 11:46:27.996: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 27 11:46:27.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-665' Oct 27 11:46:28.096: INFO: stderr: "" Oct 27 11:46:28.096: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-27T11:46:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-27T11:46:27Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-665\",\n \"resourceVersion\": \"8982296\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-665/pods/e2e-test-httpd-pod\",\n \"uid\": \"8f75c5ef-0bc5-41ea-a5c8-4eb439adf3b5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n 
\"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-458df\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-458df\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-458df\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-27T11:46:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Oct 27 11:46:28.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-665' Oct 27 11:46:28.389: INFO: stderr: "W1027 11:46:28.158700 2689 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Oct 27 11:46:28.389: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Oct 27 11:46:28.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-665' Oct 27 11:46:30.953: INFO: stderr: "" Oct 27 11:46:30.953: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:30.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-665" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":229,"skipped":3665,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:30.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Oct 27 11:46:35.079: INFO: Pod pod-hostip-bd205439-00b3-4544-9350-705bab87ece2 has hostIP: 172.18.0.12 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6501" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3671,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:35.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:46:35.178: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 27 11:46:38.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4102 create -f -' Oct 27 11:46:43.191: INFO: stderr: "" Oct 27 11:46:43.191: INFO: stdout: "e2e-test-crd-publish-openapi-6974-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 27 11:46:43.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4102 delete 
e2e-test-crd-publish-openapi-6974-crds test-cr' Oct 27 11:46:43.296: INFO: stderr: "" Oct 27 11:46:43.296: INFO: stdout: "e2e-test-crd-publish-openapi-6974-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 27 11:46:43.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4102 apply -f -' Oct 27 11:46:43.581: INFO: stderr: "" Oct 27 11:46:43.581: INFO: stdout: "e2e-test-crd-publish-openapi-6974-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 27 11:46:43.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4102 delete e2e-test-crd-publish-openapi-6974-crds test-cr' Oct 27 11:46:43.710: INFO: stderr: "" Oct 27 11:46:43.710: INFO: stdout: "e2e-test-crd-publish-openapi-6974-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 27 11:46:43.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6974-crds' Oct 27 11:46:44.029: INFO: stderr: "" Oct 27 11:46:44.029: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6974-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:46.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4102" for this suite. 
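The behaviour exercised above needs a CRD whose root schema sets x-kubernetes-preserve-unknown-fields, so arbitrary properties pass client- and server-side validation and kubectl explain has nothing beyond KIND/VERSION to show. A sketch with made-up group and kind names (the e2e CRD itself is generated on the fly):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept anything at the schema root
EOF
kubectl wait --for condition=established --timeout=60s crd/widgets.demo.example.com
kubectl apply -f - <<'EOF'
apiVersion: demo.example.com/v1
kind: Widget
metadata:
  name: test-cr
spec:
  anything: goes        # unknown properties are accepted, as in the log
EOF
kubectl explain widgets   # prints only KIND/VERSION with an empty DESCRIPTION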
• [SLOW TEST:11.909 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":231,"skipped":3693,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:46.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:47.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3214" for this suite. 
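The "find a service from listing all namespaces" step is just a cluster-wide list filtered down to the one service the test created; an equivalent from the command line, with an assumed service name:

kubectl create service clusterip list-demo --tcp=80:80
kubectl get services --all-namespaces --field-selector metadata.name=list-demo
kubectl delete service list-demo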
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":232,"skipped":3700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:47.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:46:47.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341" in namespace "downward-api-873" to be "Succeeded or Failed" Oct 27 11:46:47.154: INFO: Pod "downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341": Phase="Pending", Reason="", readiness=false. Elapsed: 9.124176ms Oct 27 11:46:49.159: INFO: Pod "downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013857861s Oct 27 11:46:51.163: INFO: Pod "downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018486193s STEP: Saw pod success Oct 27 11:46:51.163: INFO: Pod "downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341" satisfied condition "Succeeded or Failed" Oct 27 11:46:51.166: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341 container client-container: STEP: delete the pod Oct 27 11:46:51.727: INFO: Waiting for pod downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341 to disappear Oct 27 11:46:51.875: INFO: Pod downwardapi-volume-088c387f-18c9-40c3-be52-d5f6e4da9341 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:46:51.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-873" for this suite. 
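The downward API volume test above projects the container's own memory limit into a file, which the test container then reads back. A minimal sketch, assuming busybox and a 64Mi limit:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi     # the file will contain "64"
EOF
sleep 10 && kubectl logs downward-mem-demo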
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3751,"failed":0} SSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:46:51.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:46:56.192: INFO: Waiting up to 5m0s for pod "client-envvars-52466100-89d2-42eb-a815-302c34deca46" in namespace "pods-2489" to be "Succeeded or Failed" Oct 27 11:46:56.205: INFO: Pod "client-envvars-52466100-89d2-42eb-a815-302c34deca46": Phase="Pending", Reason="", readiness=false. Elapsed: 13.347852ms Oct 27 11:46:58.270: INFO: Pod "client-envvars-52466100-89d2-42eb-a815-302c34deca46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078688267s Oct 27 11:47:00.275: INFO: Pod "client-envvars-52466100-89d2-42eb-a815-302c34deca46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08296601s STEP: Saw pod success Oct 27 11:47:00.275: INFO: Pod "client-envvars-52466100-89d2-42eb-a815-302c34deca46" satisfied condition "Succeeded or Failed" Oct 27 11:47:00.278: INFO: Trying to get logs from node kali-worker pod client-envvars-52466100-89d2-42eb-a815-302c34deca46 container env3cont: STEP: delete the pod Oct 27 11:47:00.298: INFO: Waiting for pod client-envvars-52466100-89d2-42eb-a815-302c34deca46 to disappear Oct 27 11:47:00.303: INFO: Pod client-envvars-52466100-89d2-42eb-a815-302c34deca46 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:47:00.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2489" for this suite. 
• [SLOW TEST:8.401 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":3755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:47:00.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 27 11:47:10.491: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 27 11:47:10.528: INFO: Pod pod-with-poststart-exec-hook still exists Oct 27 11:47:12.528: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 27 11:47:12.532: INFO: Pod pod-with-poststart-exec-hook still exists Oct 27 11:47:14.528: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 27 11:47:14.533: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:47:14.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9572" for this suite. 
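A postStart exec hook like the one exercised above runs a command inside the container right after it is created, and the container is not reported Running until the hook completes; the e2e version has the hook call out to a helper pod, but the minimal shape is just this (names and command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart-ran > /tmp/hook"]
EOF
# Once the pod is Running, the hook has already completed:
sleep 10 && kubectl exec poststart-demo -- cat /tmp/hook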
• [SLOW TEST:14.228 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:47:14.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 27 11:47:14.647: INFO: Waiting up to 5m0s for pod "downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b" in namespace "downward-api-8783" to be "Succeeded or Failed" Oct 27 11:47:14.652: INFO: Pod "downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210858ms Oct 27 11:47:16.655: INFO: Pod "downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007821266s Oct 27 11:47:18.660: INFO: Pod "downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012295965s STEP: Saw pod success Oct 27 11:47:18.660: INFO: Pod "downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b" satisfied condition "Succeeded or Failed" Oct 27 11:47:18.663: INFO: Trying to get logs from node kali-worker2 pod downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b container dapi-container: STEP: delete the pod Oct 27 11:47:18.695: INFO: Waiting for pod downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b to disappear Oct 27 11:47:18.704: INFO: Pod downward-api-f8c6c297-ef0d-4ecd-af78-f865752ea70b no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:47:18.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8783" for this suite. 
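Exposing the pod UID as an environment variable, as the downward API test above does, is a fieldRef on metadata.uid; a minimal sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poduid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
sleep 10 && kubectl logs poduid-demo   # prints the pod's UID, which is what the test checks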
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":236,"skipped":3827,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:47:18.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Oct 27 11:47:18.781: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:47:18.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1716" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":237,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:47:18.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:47:19.390: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:47:21.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396039, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396039, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396039, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396039, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:47:24.656: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:47:24.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3736-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:47:25.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1546" for this suite. 
STEP: Destroying namespace "webhook-1546-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.924 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":238,"skipped":3895,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:47:25.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2594.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2594.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 11:47:34.005: INFO: DNS probes using dns-test-9d96bd27-4a04-4206-8806-267ba165dc89 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2594.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2594.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 11:47:42.148: INFO: File wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' 
Oct 27 11:47:42.155: INFO: File jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 27 11:47:42.155: INFO: Lookups using dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec failed for: [wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local] Oct 27 11:47:47.160: INFO: File wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 27 11:47:47.165: INFO: File jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 27 11:47:47.165: INFO: Lookups using dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec failed for: [wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local] Oct 27 11:47:52.160: INFO: File wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 27 11:47:52.165: INFO: File jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 27 11:47:52.165: INFO: Lookups using dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec failed for: [wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local] Oct 27 11:47:57.161: INFO: File wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 27 11:47:57.165: INFO: File jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local from pod dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec contains 'foo.example.com. ' instead of 'bar.example.com.' 
Oct 27 11:47:57.165: INFO: Lookups using dns-2594/dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec failed for: [wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local] Oct 27 11:48:02.166: INFO: DNS probes using dns-test-93d204e6-1dcc-442a-a35a-5e432efde8ec succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2594.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2594.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2594.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2594.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 11:48:08.975: INFO: DNS probes using dns-test-00d696e4-f8cc-495f-bddf-8413bd42f75a succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:09.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2594" for this suite. • [SLOW TEST:43.280 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":239,"skipped":3897,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:09.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-f817bbc2-4f85-483b-9e08-fa15d5388c3e STEP: Creating a pod to test consume secrets Oct 27 11:48:09.559: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598" in namespace "projected-6858" to be "Succeeded or Failed" Oct 27 11:48:09.582: INFO: Pod "pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.784411ms Oct 27 11:48:11.586: INFO: Pod "pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026525474s Oct 27 11:48:13.591: INFO: Pod "pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031248151s STEP: Saw pod success Oct 27 11:48:13.591: INFO: Pod "pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598" satisfied condition "Succeeded or Failed" Oct 27 11:48:13.594: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598 container projected-secret-volume-test: STEP: delete the pod Oct 27 11:48:13.635: INFO: Waiting for pod pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598 to disappear Oct 27 11:48:13.643: INFO: Pod pod-projected-secrets-6ef2e3dc-5bb3-417d-8bb2-0dfddf929598 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:13.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6858" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3900,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:13.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3067 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3067 I1027 11:48:13.871322 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3067, replica count: 2 I1027 11:48:16.921822 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:48:19.922066 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:48:19.922: INFO: Creating new exec pod Oct 27 11:48:24.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3067 execpodm7wqz -- /bin/sh -x -c nc -zv -t -w 2 
externalname-service 80' Oct 27 11:48:25.169: INFO: stderr: "I1027 11:48:25.094060 2835 log.go:181] (0xc00003a420) (0xc000a30780) Create stream\nI1027 11:48:25.094136 2835 log.go:181] (0xc00003a420) (0xc000a30780) Stream added, broadcasting: 1\nI1027 11:48:25.098051 2835 log.go:181] (0xc00003a420) Reply frame received for 1\nI1027 11:48:25.098120 2835 log.go:181] (0xc00003a420) (0xc0007901e0) Create stream\nI1027 11:48:25.098142 2835 log.go:181] (0xc00003a420) (0xc0007901e0) Stream added, broadcasting: 3\nI1027 11:48:25.099013 2835 log.go:181] (0xc00003a420) Reply frame received for 3\nI1027 11:48:25.099054 2835 log.go:181] (0xc00003a420) (0xc000a30c80) Create stream\nI1027 11:48:25.099068 2835 log.go:181] (0xc00003a420) (0xc000a30c80) Stream added, broadcasting: 5\nI1027 11:48:25.100120 2835 log.go:181] (0xc00003a420) Reply frame received for 5\nI1027 11:48:25.158341 2835 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:48:25.158372 2835 log.go:181] (0xc000a30c80) (5) Data frame handling\nI1027 11:48:25.158391 2835 log.go:181] (0xc000a30c80) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1027 11:48:25.159008 2835 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:48:25.159038 2835 log.go:181] (0xc000a30c80) (5) Data frame handling\nI1027 11:48:25.159066 2835 log.go:181] (0xc000a30c80) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1027 11:48:25.159380 2835 log.go:181] (0xc00003a420) Data frame received for 3\nI1027 11:48:25.159393 2835 log.go:181] (0xc0007901e0) (3) Data frame handling\nI1027 11:48:25.159484 2835 log.go:181] (0xc00003a420) Data frame received for 5\nI1027 11:48:25.159507 2835 log.go:181] (0xc000a30c80) (5) Data frame handling\nI1027 11:48:25.161455 2835 log.go:181] (0xc00003a420) Data frame received for 1\nI1027 11:48:25.161474 2835 log.go:181] (0xc000a30780) (1) Data frame handling\nI1027 11:48:25.161485 2835 log.go:181] (0xc000a30780) (1) Data frame sent\nI1027 11:48:25.161600 2835 log.go:181] (0xc00003a420) (0xc000a30780) Stream removed, broadcasting: 1\nI1027 11:48:25.161727 2835 log.go:181] (0xc00003a420) Go away received\nI1027 11:48:25.162166 2835 log.go:181] (0xc00003a420) (0xc000a30780) Stream removed, broadcasting: 1\nI1027 11:48:25.162204 2835 log.go:181] (0xc00003a420) (0xc0007901e0) Stream removed, broadcasting: 3\nI1027 11:48:25.162215 2835 log.go:181] (0xc00003a420) (0xc000a30c80) Stream removed, broadcasting: 5\n" Oct 27 11:48:25.169: INFO: stdout: "" Oct 27 11:48:25.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3067 execpodm7wqz -- /bin/sh -x -c nc -zv -t -w 2 10.96.176.50 80' Oct 27 11:48:25.397: INFO: stderr: "I1027 11:48:25.301402 2853 log.go:181] (0xc000e92fd0) (0xc0004635e0) Create stream\nI1027 11:48:25.301460 2853 log.go:181] (0xc000e92fd0) (0xc0004635e0) Stream added, broadcasting: 1\nI1027 11:48:25.306591 2853 log.go:181] (0xc000e92fd0) Reply frame received for 1\nI1027 11:48:25.306634 2853 log.go:181] (0xc000e92fd0) (0xc000508640) Create stream\nI1027 11:48:25.306647 2853 log.go:181] (0xc000e92fd0) (0xc000508640) Stream added, broadcasting: 3\nI1027 11:48:25.307535 2853 log.go:181] (0xc000e92fd0) Reply frame received for 3\nI1027 11:48:25.307570 2853 log.go:181] (0xc000e92fd0) (0xc000463e00) Create stream\nI1027 11:48:25.307584 2853 log.go:181] (0xc000e92fd0) (0xc000463e00) Stream added, broadcasting: 5\nI1027 11:48:25.308540 2853 log.go:181] (0xc000e92fd0) Reply frame 
received for 5\nI1027 11:48:25.390147 2853 log.go:181] (0xc000e92fd0) Data frame received for 5\nI1027 11:48:25.390184 2853 log.go:181] (0xc000463e00) (5) Data frame handling\nI1027 11:48:25.390199 2853 log.go:181] (0xc000463e00) (5) Data frame sent\nI1027 11:48:25.390208 2853 log.go:181] (0xc000e92fd0) Data frame received for 5\nI1027 11:48:25.390215 2853 log.go:181] (0xc000463e00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.176.50 80\nConnection to 10.96.176.50 80 port [tcp/http] succeeded!\nI1027 11:48:25.390238 2853 log.go:181] (0xc000e92fd0) Data frame received for 3\nI1027 11:48:25.390248 2853 log.go:181] (0xc000508640) (3) Data frame handling\nI1027 11:48:25.391391 2853 log.go:181] (0xc000e92fd0) Data frame received for 1\nI1027 11:48:25.391419 2853 log.go:181] (0xc0004635e0) (1) Data frame handling\nI1027 11:48:25.391426 2853 log.go:181] (0xc0004635e0) (1) Data frame sent\nI1027 11:48:25.391436 2853 log.go:181] (0xc000e92fd0) (0xc0004635e0) Stream removed, broadcasting: 1\nI1027 11:48:25.391498 2853 log.go:181] (0xc000e92fd0) Go away received\nI1027 11:48:25.391756 2853 log.go:181] (0xc000e92fd0) (0xc0004635e0) Stream removed, broadcasting: 1\nI1027 11:48:25.391779 2853 log.go:181] (0xc000e92fd0) (0xc000508640) Stream removed, broadcasting: 3\nI1027 11:48:25.391788 2853 log.go:181] (0xc000e92fd0) (0xc000463e00) Stream removed, broadcasting: 5\n" Oct 27 11:48:25.397: INFO: stdout: "" Oct 27 11:48:25.397: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:25.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3067" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.791 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":241,"skipped":3909,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:25.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 27 11:48:25.531: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 27 11:48:25.537: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 27 11:48:25.537: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 27 11:48:25.586: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 27 11:48:25.586: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 27 11:48:25.627: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 27 11:48:25.627: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 27 11:48:32.863: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:33.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-869" for this suite. 
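For reference, a LimitRange of the kind this test exercises can be sketched as below. The namespace, object name, and values are illustrative assumptions (they loosely mirror the 100m CPU / 200Mi memory requests and 500m / 500Mi limits verified in the log above), not the test's generated objects.

# Illustrative only: container defaults applied by a LimitRange (names/values assumed)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limitrange
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container declares no requests
      cpu: 100m
      memory: 200Mi
    default:               # applied when a container declares no limits
      cpu: 500m
      memory: 500Mi
EOF
# A pod created with no resource requirements in that namespace then shows the
# defaulted values under spec.containers[].resources:
kubectl run limitrange-demo --image=busybox --restart=Never -- sleep 3600
kubectl get pod limitrange-demo -o jsonpath='{.spec.containers[0].resources}'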
• [SLOW TEST:7.959 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":242,"skipped":3932,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:33.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-1c8c2e3c-7fd4-4dc3-ae54-a24485b83487 STEP: Creating a pod to test consume secrets Oct 27 11:48:33.793: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af" in namespace "projected-1444" to be "Succeeded or Failed" Oct 27 11:48:33.865: INFO: Pod "pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af": Phase="Pending", Reason="", readiness=false. Elapsed: 71.718663ms Oct 27 11:48:36.434: INFO: Pod "pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.641407166s Oct 27 11:48:38.451: INFO: Pod "pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.658617425s Oct 27 11:48:40.547: INFO: Pod "pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.754577809s STEP: Saw pod success Oct 27 11:48:40.547: INFO: Pod "pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af" satisfied condition "Succeeded or Failed" Oct 27 11:48:40.551: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af container projected-secret-volume-test: STEP: delete the pod Oct 27 11:48:41.147: INFO: Waiting for pod pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af to disappear Oct 27 11:48:41.158: INFO: Pod pod-projected-secrets-da3b8f0e-e6aa-4a54-b20d-d667632444af no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:41.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1444" for this suite. 
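The projected-secret volume with mappings consumed in this test has the following general shape; the secret name, key, and mount paths here are placeholders for illustration, not the generated names from the run above.

# Illustrative only: a projected secret volume where the key is remapped on disk
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/projected-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping": key exposed under a different path
EOF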
• [SLOW TEST:7.951 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":3932,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:41.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 27 11:48:41.804: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:50.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7239" for this suite. 
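A RestartNever pod with init containers, of the shape this test creates, can be approximated as follows; the images and commands are assumptions for illustration.

# Illustrative only: init containers run to completion, in order, before the app container
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "true"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "true"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]
EOF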
• [SLOW TEST:8.860 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":244,"skipped":3933,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:50.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:48:56.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-322" for this suite. 
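The "should not conflict" case above mounts a secret and a configmap in one pod (both are cleaned up in the steps logged). A pod of roughly that shape, with illustrative names, looks like:

# Illustrative only: secret and configMap volumes coexisting in one pod
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: configmap-vol
      mountPath: /etc/configmap-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-demo-secret       # assumed pre-existing secret
  - name: configmap-vol
    configMap:
      name: wrapper-demo-configmap          # assumed pre-existing configmap
EOF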
• [SLOW TEST:6.457 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":245,"skipped":4006,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:48:56.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:48:58.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:49:00.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 11:49:02.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, 
loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396138, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:49:05.889: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:49:05.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8750" for this suite. STEP: Destroying namespace "webhook-8750-markers" for this suite. 
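The discovery documents fetched in the steps above can be inspected directly with kubectl's raw API access; these commands are standard against any conformant API server (jq is used only for readability and is assumed to be installed).

kubectl get --raw /apis | jq '.groups[] | select(.name=="admissionregistration.k8s.io")'
kubectl get --raw /apis/admissionregistration.k8s.io
# the v1 document should list mutatingwebhookconfigurations and validatingwebhookconfigurations
kubectl get --raw /apis/admissionregistration.k8s.io/v1 | jq '.resources[].name'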
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.337 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":246,"skipped":4010,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:49:06.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-f6a87e42-c189-4649-9d4b-88e49543f6c3 STEP: Creating a pod to test consume secrets Oct 27 11:49:06.108: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a" in namespace "projected-4034" to be "Succeeded or Failed" Oct 27 11:49:06.137: INFO: Pod "pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.83337ms Oct 27 11:49:08.142: INFO: Pod "pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034345496s Oct 27 11:49:10.147: INFO: Pod "pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039643381s STEP: Saw pod success Oct 27 11:49:10.147: INFO: Pod "pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a" satisfied condition "Succeeded or Failed" Oct 27 11:49:10.151: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a container projected-secret-volume-test: STEP: delete the pod Oct 27 11:49:10.185: INFO: Waiting for pod pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a to disappear Oct 27 11:49:10.194: INFO: Pod pod-projected-secrets-5a2a75ca-90ef-44e4-8838-b8932b198a5a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:49:10.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4034" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":247,"skipped":4010,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:49:10.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3779 Oct 27 11:49:12.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3779 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 27 11:49:12.641: INFO: stderr: "I1027 11:49:12.551830 2871 log.go:181] (0xc000264000) (0xc000aa0820) Create stream\nI1027 11:49:12.551885 2871 log.go:181] (0xc000264000) (0xc000aa0820) Stream added, broadcasting: 1\nI1027 11:49:12.554037 2871 log.go:181] (0xc000264000) Reply frame received for 1\nI1027 11:49:12.554079 2871 log.go:181] (0xc000264000) (0xc0003094a0) Create stream\nI1027 11:49:12.554097 2871 log.go:181] (0xc000264000) (0xc0003094a0) Stream added, broadcasting: 3\nI1027 11:49:12.554926 2871 log.go:181] (0xc000264000) Reply frame received for 3\nI1027 11:49:12.554946 2871 log.go:181] (0xc000264000) (0xc000aa1cc0) Create stream\nI1027 11:49:12.554952 2871 log.go:181] (0xc000264000) (0xc000aa1cc0) Stream added, broadcasting: 5\nI1027 11:49:12.555613 2871 log.go:181] (0xc000264000) Reply frame received for 5\nI1027 11:49:12.627678 2871 log.go:181] (0xc000264000) Data frame received for 5\nI1027 
11:49:12.627710 2871 log.go:181] (0xc000aa1cc0) (5) Data frame handling\nI1027 11:49:12.627730 2871 log.go:181] (0xc000aa1cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1027 11:49:12.631419 2871 log.go:181] (0xc000264000) Data frame received for 3\nI1027 11:49:12.631434 2871 log.go:181] (0xc0003094a0) (3) Data frame handling\nI1027 11:49:12.631442 2871 log.go:181] (0xc0003094a0) (3) Data frame sent\nI1027 11:49:12.632044 2871 log.go:181] (0xc000264000) Data frame received for 3\nI1027 11:49:12.632073 2871 log.go:181] (0xc0003094a0) (3) Data frame handling\nI1027 11:49:12.632176 2871 log.go:181] (0xc000264000) Data frame received for 5\nI1027 11:49:12.632211 2871 log.go:181] (0xc000aa1cc0) (5) Data frame handling\nI1027 11:49:12.634128 2871 log.go:181] (0xc000264000) Data frame received for 1\nI1027 11:49:12.634168 2871 log.go:181] (0xc000aa0820) (1) Data frame handling\nI1027 11:49:12.634201 2871 log.go:181] (0xc000aa0820) (1) Data frame sent\nI1027 11:49:12.634228 2871 log.go:181] (0xc000264000) (0xc000aa0820) Stream removed, broadcasting: 1\nI1027 11:49:12.634249 2871 log.go:181] (0xc000264000) Go away received\nI1027 11:49:12.634758 2871 log.go:181] (0xc000264000) (0xc000aa0820) Stream removed, broadcasting: 1\nI1027 11:49:12.634777 2871 log.go:181] (0xc000264000) (0xc0003094a0) Stream removed, broadcasting: 3\nI1027 11:49:12.634786 2871 log.go:181] (0xc000264000) (0xc000aa1cc0) Stream removed, broadcasting: 5\n" Oct 27 11:49:12.641: INFO: stdout: "iptables" Oct 27 11:49:12.641: INFO: proxyMode: iptables Oct 27 11:49:12.647: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 27 11:49:12.673: INFO: Pod kube-proxy-mode-detector still exists Oct 27 11:49:14.674: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 27 11:49:14.830: INFO: Pod kube-proxy-mode-detector still exists Oct 27 11:49:16.674: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 27 11:49:16.678: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3779 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3779 I1027 11:49:16.718557 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3779, replica count: 3 I1027 11:49:19.769004 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:49:22.769265 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:49:22.775: INFO: Creating new exec pod Oct 27 11:49:27.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpod-affinity6rrhd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Oct 27 11:49:28.064: INFO: stderr: "I1027 11:49:27.954613 2889 log.go:181] (0xc0007ab6b0) (0xc000674640) Create stream\nI1027 11:49:27.954687 2889 log.go:181] (0xc0007ab6b0) (0xc000674640) Stream added, broadcasting: 1\nI1027 11:49:27.959978 2889 log.go:181] (0xc0007ab6b0) Reply frame received for 1\nI1027 11:49:27.960022 2889 log.go:181] (0xc0007ab6b0) (0xc00030b540) Create stream\nI1027 11:49:27.960041 2889 log.go:181] (0xc0007ab6b0) (0xc00030b540) Stream added, broadcasting: 3\nI1027 11:49:27.960907 2889 log.go:181] 
(0xc0007ab6b0) Reply frame received for 3\nI1027 11:49:27.960938 2889 log.go:181] (0xc0007ab6b0) (0xc000674000) Create stream\nI1027 11:49:27.960947 2889 log.go:181] (0xc0007ab6b0) (0xc000674000) Stream added, broadcasting: 5\nI1027 11:49:27.961724 2889 log.go:181] (0xc0007ab6b0) Reply frame received for 5\nI1027 11:49:28.055185 2889 log.go:181] (0xc0007ab6b0) Data frame received for 5\nI1027 11:49:28.055229 2889 log.go:181] (0xc000674000) (5) Data frame handling\nI1027 11:49:28.055252 2889 log.go:181] (0xc000674000) (5) Data frame sent\nI1027 11:49:28.055264 2889 log.go:181] (0xc0007ab6b0) Data frame received for 5\nI1027 11:49:28.055274 2889 log.go:181] (0xc000674000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1027 11:49:28.055302 2889 log.go:181] (0xc000674000) (5) Data frame sent\nI1027 11:49:28.055735 2889 log.go:181] (0xc0007ab6b0) Data frame received for 3\nI1027 11:49:28.055771 2889 log.go:181] (0xc00030b540) (3) Data frame handling\nI1027 11:49:28.055996 2889 log.go:181] (0xc0007ab6b0) Data frame received for 5\nI1027 11:49:28.056033 2889 log.go:181] (0xc000674000) (5) Data frame handling\nI1027 11:49:28.057695 2889 log.go:181] (0xc0007ab6b0) Data frame received for 1\nI1027 11:49:28.057736 2889 log.go:181] (0xc000674640) (1) Data frame handling\nI1027 11:49:28.057762 2889 log.go:181] (0xc000674640) (1) Data frame sent\nI1027 11:49:28.057785 2889 log.go:181] (0xc0007ab6b0) (0xc000674640) Stream removed, broadcasting: 1\nI1027 11:49:28.057826 2889 log.go:181] (0xc0007ab6b0) Go away received\nI1027 11:49:28.058316 2889 log.go:181] (0xc0007ab6b0) (0xc000674640) Stream removed, broadcasting: 1\nI1027 11:49:28.058353 2889 log.go:181] (0xc0007ab6b0) (0xc00030b540) Stream removed, broadcasting: 3\nI1027 11:49:28.058377 2889 log.go:181] (0xc0007ab6b0) (0xc000674000) Stream removed, broadcasting: 5\n" Oct 27 11:49:28.065: INFO: stdout: "" Oct 27 11:49:28.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpod-affinity6rrhd -- /bin/sh -x -c nc -zv -t -w 2 10.96.121.11 80' Oct 27 11:49:28.276: INFO: stderr: "I1027 11:49:28.203253 2907 log.go:181] (0xc0007b1550) (0xc0007a8a00) Create stream\nI1027 11:49:28.203313 2907 log.go:181] (0xc0007b1550) (0xc0007a8a00) Stream added, broadcasting: 1\nI1027 11:49:28.209114 2907 log.go:181] (0xc0007b1550) Reply frame received for 1\nI1027 11:49:28.209165 2907 log.go:181] (0xc0007b1550) (0xc000cde0a0) Create stream\nI1027 11:49:28.209180 2907 log.go:181] (0xc0007b1550) (0xc000cde0a0) Stream added, broadcasting: 3\nI1027 11:49:28.210313 2907 log.go:181] (0xc0007b1550) Reply frame received for 3\nI1027 11:49:28.210351 2907 log.go:181] (0xc0007b1550) (0xc0005bc1e0) Create stream\nI1027 11:49:28.210363 2907 log.go:181] (0xc0007b1550) (0xc0005bc1e0) Stream added, broadcasting: 5\nI1027 11:49:28.211185 2907 log.go:181] (0xc0007b1550) Reply frame received for 5\nI1027 11:49:28.270525 2907 log.go:181] (0xc0007b1550) Data frame received for 3\nI1027 11:49:28.270568 2907 log.go:181] (0xc000cde0a0) (3) Data frame handling\nI1027 11:49:28.270593 2907 log.go:181] (0xc0007b1550) Data frame received for 5\nI1027 11:49:28.270614 2907 log.go:181] (0xc0005bc1e0) (5) Data frame handling\nI1027 11:49:28.270626 2907 log.go:181] (0xc0005bc1e0) (5) Data frame sent\nI1027 11:49:28.270634 2907 log.go:181] (0xc0007b1550) Data frame received for 5\nI1027 11:49:28.270641 2907 log.go:181] 
(0xc0005bc1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.121.11 80\nConnection to 10.96.121.11 80 port [tcp/http] succeeded!\nI1027 11:49:28.270652 2907 log.go:181] (0xc0007b1550) Data frame received for 1\nI1027 11:49:28.270678 2907 log.go:181] (0xc0007a8a00) (1) Data frame handling\nI1027 11:49:28.270687 2907 log.go:181] (0xc0007a8a00) (1) Data frame sent\nI1027 11:49:28.270696 2907 log.go:181] (0xc0007b1550) (0xc0007a8a00) Stream removed, broadcasting: 1\nI1027 11:49:28.270721 2907 log.go:181] (0xc0007b1550) Go away received\nI1027 11:49:28.271121 2907 log.go:181] (0xc0007b1550) (0xc0007a8a00) Stream removed, broadcasting: 1\nI1027 11:49:28.271142 2907 log.go:181] (0xc0007b1550) (0xc000cde0a0) Stream removed, broadcasting: 3\nI1027 11:49:28.271149 2907 log.go:181] (0xc0007b1550) (0xc0005bc1e0) Stream removed, broadcasting: 5\n" Oct 27 11:49:28.276: INFO: stdout: "" Oct 27 11:49:28.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpod-affinity6rrhd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.121.11:80/ ; done' Oct 27 11:49:28.608: INFO: stderr: "I1027 11:49:28.419711 2925 log.go:181] (0xc00026e0b0) (0xc0005c61e0) Create stream\nI1027 11:49:28.419776 2925 log.go:181] (0xc00026e0b0) (0xc0005c61e0) Stream added, broadcasting: 1\nI1027 11:49:28.422092 2925 log.go:181] (0xc00026e0b0) Reply frame received for 1\nI1027 11:49:28.422141 2925 log.go:181] (0xc00026e0b0) (0xc000b303c0) Create stream\nI1027 11:49:28.422152 2925 log.go:181] (0xc00026e0b0) (0xc000b303c0) Stream added, broadcasting: 3\nI1027 11:49:28.423054 2925 log.go:181] (0xc00026e0b0) Reply frame received for 3\nI1027 11:49:28.423090 2925 log.go:181] (0xc00026e0b0) (0xc000fca280) Create stream\nI1027 11:49:28.423101 2925 log.go:181] (0xc00026e0b0) (0xc000fca280) Stream added, broadcasting: 5\nI1027 11:49:28.424164 2925 log.go:181] (0xc00026e0b0) Reply frame received for 5\nI1027 11:49:28.494490 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.494551 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.494592 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.494631 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.494660 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.494685 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.501466 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.501484 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.501494 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.502003 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.502027 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.502040 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.502059 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.502072 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.502084 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.510274 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.510289 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.510297 2925 log.go:181] (0xc000b303c0) (3) Data 
frame sent\nI1027 11:49:28.510586 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.510613 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.510646 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.510661 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.510673 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.510693 2925 log.go:181] (0xc000fca280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.510712 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.510732 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.510746 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.517929 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.517948 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.517962 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.518946 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.518962 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.518988 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.519017 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.519037 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.519055 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.524623 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.524645 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.524663 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.525644 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.525682 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.525693 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.525710 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.525719 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.525729 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.531993 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.532025 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.532053 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.532802 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.532820 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.532830 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.532970 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.532984 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.532998 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.539531 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.539550 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.539565 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.540226 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.540252 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.540264 2925 log.go:181] (0xc000fca280) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.540293 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.540328 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.540358 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.545905 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.545924 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.545936 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.546724 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.546752 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.546767 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.546789 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.546803 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.546820 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.554633 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.554660 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.554686 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.555390 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.555426 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.555441 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.555464 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.555478 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.555492 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.560230 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.560246 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.560258 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.560670 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.560683 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.560691 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.560735 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.560751 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.560764 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.563933 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.563956 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.563969 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.564347 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.564367 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.564384 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.564390 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.564400 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.564406 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.568601 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.568627 2925 log.go:181] (0xc000b303c0) (3) Data frame 
handling\nI1027 11:49:28.568648 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.568822 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.568903 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.568924 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.568933 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.568952 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.568959 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.568966 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.568972 2925 log.go:181] (0xc000fca280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.568985 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.572352 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.572367 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.572382 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.573096 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.573130 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.573149 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.573168 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.573177 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.573186 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.578567 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.578597 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.578628 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.578975 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.578997 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.579006 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.579021 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.579028 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.579035 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.579042 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.579057 2925 log.go:181] (0xc000fca280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.579085 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.584069 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.584100 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.584132 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.584564 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.584668 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.584695 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.584711 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.584723 2925 log.go:181] (0xc000fca280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.584746 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.584784 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.584801 2925 log.go:181] (0xc000b303c0) (3) Data 
frame sent\nI1027 11:49:28.584815 2925 log.go:181] (0xc000fca280) (5) Data frame sent\nI1027 11:49:28.590116 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.590143 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.590165 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.590799 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.590842 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.590853 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.590868 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.590875 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.590884 2925 log.go:181] (0xc000fca280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.597245 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.597264 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.597274 2925 log.go:181] (0xc000b303c0) (3) Data frame sent\nI1027 11:49:28.598080 2925 log.go:181] (0xc00026e0b0) Data frame received for 3\nI1027 11:49:28.598095 2925 log.go:181] (0xc000b303c0) (3) Data frame handling\nI1027 11:49:28.598337 2925 log.go:181] (0xc00026e0b0) Data frame received for 5\nI1027 11:49:28.598370 2925 log.go:181] (0xc000fca280) (5) Data frame handling\nI1027 11:49:28.602928 2925 log.go:181] (0xc00026e0b0) Data frame received for 1\nI1027 11:49:28.602959 2925 log.go:181] (0xc0005c61e0) (1) Data frame handling\nI1027 11:49:28.603012 2925 log.go:181] (0xc0005c61e0) (1) Data frame sent\nI1027 11:49:28.603034 2925 log.go:181] (0xc00026e0b0) (0xc0005c61e0) Stream removed, broadcasting: 1\nI1027 11:49:28.603056 2925 log.go:181] (0xc00026e0b0) Go away received\nI1027 11:49:28.603343 2925 log.go:181] (0xc00026e0b0) (0xc0005c61e0) Stream removed, broadcasting: 1\nI1027 11:49:28.603356 2925 log.go:181] (0xc00026e0b0) (0xc000b303c0) Stream removed, broadcasting: 3\nI1027 11:49:28.603361 2925 log.go:181] (0xc00026e0b0) (0xc000fca280) Stream removed, broadcasting: 5\n" Oct 27 11:49:28.609: INFO: stdout: "\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs\naffinity-clusterip-timeout-9npzs" Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received 
response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Received response from host: affinity-clusterip-timeout-9npzs Oct 27 11:49:28.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpod-affinity6rrhd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.121.11:80/' Oct 27 11:49:28.837: INFO: stderr: "I1027 11:49:28.744917 2944 log.go:181] (0xc000d98fd0) (0xc0005a4000) Create stream\nI1027 11:49:28.744990 2944 log.go:181] (0xc000d98fd0) (0xc0005a4000) Stream added, broadcasting: 1\nI1027 11:49:28.749084 2944 log.go:181] (0xc000d98fd0) Reply frame received for 1\nI1027 11:49:28.749145 2944 log.go:181] (0xc000d98fd0) (0xc0005a4640) Create stream\nI1027 11:49:28.749167 2944 log.go:181] (0xc000d98fd0) (0xc0005a4640) Stream added, broadcasting: 3\nI1027 11:49:28.750136 2944 log.go:181] (0xc000d98fd0) Reply frame received for 3\nI1027 11:49:28.750185 2944 log.go:181] (0xc000d98fd0) (0xc000aa01e0) Create stream\nI1027 11:49:28.750199 2944 log.go:181] (0xc000d98fd0) (0xc000aa01e0) Stream added, broadcasting: 5\nI1027 11:49:28.751025 2944 log.go:181] (0xc000d98fd0) Reply frame received for 5\nI1027 11:49:28.823715 2944 log.go:181] (0xc000d98fd0) Data frame received for 5\nI1027 11:49:28.823738 2944 log.go:181] (0xc000aa01e0) (5) Data frame handling\nI1027 11:49:28.823750 2944 log.go:181] (0xc000aa01e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:28.828133 2944 log.go:181] (0xc000d98fd0) Data frame received for 3\nI1027 11:49:28.828166 2944 log.go:181] (0xc0005a4640) (3) Data frame handling\nI1027 11:49:28.828189 2944 log.go:181] (0xc0005a4640) (3) Data frame sent\nI1027 11:49:28.829012 2944 log.go:181] (0xc000d98fd0) Data frame received for 3\nI1027 11:49:28.829046 2944 log.go:181] (0xc0005a4640) (3) Data frame handling\nI1027 11:49:28.829070 2944 log.go:181] (0xc000d98fd0) Data frame received for 5\nI1027 11:49:28.829094 2944 log.go:181] (0xc000aa01e0) (5) Data frame handling\nI1027 11:49:28.830504 2944 log.go:181] (0xc000d98fd0) Data frame received for 1\nI1027 11:49:28.830519 2944 log.go:181] (0xc0005a4000) (1) Data frame handling\nI1027 11:49:28.830525 2944 log.go:181] (0xc0005a4000) (1) Data frame sent\nI1027 11:49:28.830532 2944 log.go:181] (0xc000d98fd0) (0xc0005a4000) Stream removed, broadcasting: 1\nI1027 11:49:28.830655 2944 log.go:181] (0xc000d98fd0) Go away received\nI1027 11:49:28.830788 2944 log.go:181] (0xc000d98fd0) (0xc0005a4000) Stream removed, broadcasting: 1\nI1027 11:49:28.830799 2944 log.go:181] (0xc000d98fd0) (0xc0005a4640) Stream removed, broadcasting: 3\nI1027 11:49:28.830804 2944 log.go:181] (0xc000d98fd0) (0xc000aa01e0) Stream removed, broadcasting: 5\n" Oct 27 11:49:28.837: INFO: stdout: "affinity-clusterip-timeout-9npzs" Oct 27 11:49:43.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3779 execpod-affinity6rrhd -- /bin/sh -x -c curl -q -s --connect-timeout 2 
http://10.96.121.11:80/' Oct 27 11:49:44.102: INFO: stderr: "I1027 11:49:43.984108 2962 log.go:181] (0xc000a1b550) (0xc000738960) Create stream\nI1027 11:49:43.984150 2962 log.go:181] (0xc000a1b550) (0xc000738960) Stream added, broadcasting: 1\nI1027 11:49:43.988638 2962 log.go:181] (0xc000a1b550) Reply frame received for 1\nI1027 11:49:43.988667 2962 log.go:181] (0xc000a1b550) (0xc0009141e0) Create stream\nI1027 11:49:43.988676 2962 log.go:181] (0xc000a1b550) (0xc0009141e0) Stream added, broadcasting: 3\nI1027 11:49:43.989595 2962 log.go:181] (0xc000a1b550) Reply frame received for 3\nI1027 11:49:43.989626 2962 log.go:181] (0xc000a1b550) (0xc000506780) Create stream\nI1027 11:49:43.989636 2962 log.go:181] (0xc000a1b550) (0xc000506780) Stream added, broadcasting: 5\nI1027 11:49:43.990327 2962 log.go:181] (0xc000a1b550) Reply frame received for 5\nI1027 11:49:44.090934 2962 log.go:181] (0xc000a1b550) Data frame received for 5\nI1027 11:49:44.090967 2962 log.go:181] (0xc000506780) (5) Data frame handling\nI1027 11:49:44.090991 2962 log.go:181] (0xc000506780) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.121.11:80/\nI1027 11:49:44.094052 2962 log.go:181] (0xc000a1b550) Data frame received for 3\nI1027 11:49:44.094072 2962 log.go:181] (0xc0009141e0) (3) Data frame handling\nI1027 11:49:44.094090 2962 log.go:181] (0xc0009141e0) (3) Data frame sent\nI1027 11:49:44.094462 2962 log.go:181] (0xc000a1b550) Data frame received for 5\nI1027 11:49:44.094480 2962 log.go:181] (0xc000506780) (5) Data frame handling\nI1027 11:49:44.094770 2962 log.go:181] (0xc000a1b550) Data frame received for 3\nI1027 11:49:44.094787 2962 log.go:181] (0xc0009141e0) (3) Data frame handling\nI1027 11:49:44.096206 2962 log.go:181] (0xc000a1b550) Data frame received for 1\nI1027 11:49:44.096222 2962 log.go:181] (0xc000738960) (1) Data frame handling\nI1027 11:49:44.096234 2962 log.go:181] (0xc000738960) (1) Data frame sent\nI1027 11:49:44.096246 2962 log.go:181] (0xc000a1b550) (0xc000738960) Stream removed, broadcasting: 1\nI1027 11:49:44.096264 2962 log.go:181] (0xc000a1b550) Go away received\nI1027 11:49:44.096749 2962 log.go:181] (0xc000a1b550) (0xc000738960) Stream removed, broadcasting: 1\nI1027 11:49:44.096778 2962 log.go:181] (0xc000a1b550) (0xc0009141e0) Stream removed, broadcasting: 3\nI1027 11:49:44.096790 2962 log.go:181] (0xc000a1b550) (0xc000506780) Stream removed, broadcasting: 5\n" Oct 27 11:49:44.102: INFO: stdout: "affinity-clusterip-timeout-xvcsz" Oct 27 11:49:44.102: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3779, will wait for the garbage collector to delete the pods Oct 27 11:49:44.219: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.023456ms Oct 27 11:49:44.919: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 700.196459ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:49:58.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3779" for this suite. 
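The ClusterIP affinity run above pins every request from the exec pod to the same backend (affinity-clusterip-timeout-9npzs) and only fails over to another pod (affinity-clusterip-timeout-xvcsz) once the affinity window has expired; that behaviour comes from Service.spec.sessionAffinity=ClientIP combined with sessionAffinityConfig.clientIP.timeoutSeconds. A minimal client-go sketch of such a Service follows; the namespace, selector, and 10-second timeout are illustrative assumptions, not values taken from the test source.

```go
// Sketch: a ClusterIP Service with ClientIP session affinity and a short
// affinity timeout. Namespace, selector and the 10s timeout are assumptions.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip-timeout"},
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Pin each client IP to a single backend pod...
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// ...but only for a short window; afterwards another pod may answer.
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(10)},
			},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```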
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:48.569 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":248,"skipped":4020,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:49:58.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 27 11:50:06.990: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 27 11:50:07.143: INFO: Pod pod-with-prestop-exec-hook still exists Oct 27 11:50:09.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 27 11:50:09.147: INFO: Pod pod-with-prestop-exec-hook still exists Oct 27 11:50:11.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 27 11:50:11.149: INFO: Pod pod-with-prestop-exec-hook still exists Oct 27 11:50:13.143: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 27 11:50:13.148: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:50:13.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7564" for this suite. 
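In the preStop run above, the pod lingers for several seconds after the delete (11:50:07 to 11:50:13) because the kubelet first executes the exec preStop handler and only then kills the container; the test then checks that the hook actually ran. A minimal sketch of such a pod spec using the Go API types is below; the image, command, and hook body are placeholders, and recent k8s.io/api releases call the handler struct LifecycleHandler (the v1.19 tree used in this run still names it Handler).

```go
// Sketch: a container with an exec preStop hook that runs before termination.
// Image, command and the hook body are illustrative placeholders.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var podWithPreStopExecHook = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "pod-with-prestop-exec-hook",
			Image:   "busybox",
			Command: []string{"sh", "-c", "sleep 3600"},
			Lifecycle: &corev1.Lifecycle{
				// Runs inside the container just before it is stopped; the kubelet
				// waits for it, bounded by terminationGracePeriodSeconds.
				PreStop: &corev1.LifecycleHandler{
					Exec: &corev1.ExecAction{
						// Placeholder hook body; the e2e test instead reports back to a
						// separate handler pod so the hook's execution can be verified.
						Command: []string{"sh", "-c", "echo prestop > /tmp/prestop"},
					},
				},
			},
		}},
	},
}
```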
• [SLOW TEST:14.394 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4031,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:50:13.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-8dc58ee5-48cd-41fa-8a5b-f13b24541377 STEP: Creating a pod to test consume secrets Oct 27 11:50:13.274: INFO: Waiting up to 5m0s for pod "pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270" in namespace "secrets-1321" to be "Succeeded or Failed" Oct 27 11:50:13.285: INFO: Pod "pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270": Phase="Pending", Reason="", readiness=false. Elapsed: 10.645793ms Oct 27 11:50:15.288: INFO: Pod "pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01399516s Oct 27 11:50:17.293: INFO: Pod "pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018641575s Oct 27 11:50:19.297: INFO: Pod "pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022959231s STEP: Saw pod success Oct 27 11:50:19.297: INFO: Pod "pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270" satisfied condition "Succeeded or Failed" Oct 27 11:50:19.301: INFO: Trying to get logs from node kali-worker pod pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270 container secret-volume-test: STEP: delete the pod Oct 27 11:50:19.343: INFO: Waiting for pod pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270 to disappear Oct 27 11:50:19.346: INFO: Pod pod-secrets-d29fb110-4837-4919-a873-e5a1eee52270 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:50:19.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1321" for this suite. • [SLOW TEST:6.188 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":4041,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:50:19.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Oct 27 11:50:27.056: INFO: 10 pods remaining Oct 27 11:50:27.056: INFO: 0 pods has nil DeletionTimestamp Oct 27 11:50:27.056: INFO: Oct 27 11:50:28.359: INFO: 0 pods remaining Oct 27 11:50:28.359: INFO: 0 pods has nil DeletionTimestamp Oct 27 11:50:28.359: INFO: STEP: Gathering metrics W1027 11:50:29.335509 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 27 11:51:31.380: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:51:31.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4907" for this suite. 
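The garbage-collector case above asks for a cascading delete with deleteOptions.propagationPolicy=Foreground, which is why the pods drain first (10 remaining, then 0) while the ReplicationController itself is kept, carrying a deletionTimestamp and the foregroundDeletion finalizer, until its dependents are gone. A hedged client-go sketch of that delete call follows; the RC name is an assumption, only the namespace comes from this run.

```go
// Sketch: foreground cascading delete of a ReplicationController.
// The RC name is an assumption for illustration.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC stays visible (with the
	// foregroundDeletion finalizer) until the GC has deleted every owned pod.
	policy := metav1.DeletePropagationForeground
	if err := cs.CoreV1().ReplicationControllers("gc-4907").Delete(
		context.TODO(), "simpletest.rc",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}
```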
• [SLOW TEST:72.035 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":251,"skipped":4054,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:51:31.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Oct 27 11:51:31.602: INFO: Waiting up to 5m0s for pod "client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378" in namespace "containers-4484" to be "Succeeded or Failed" Oct 27 11:51:31.620: INFO: Pod "client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378": Phase="Pending", Reason="", readiness=false. Elapsed: 18.352577ms Oct 27 11:51:33.646: INFO: Pod "client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04438359s Oct 27 11:51:35.651: INFO: Pod "client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048788596s Oct 27 11:51:37.656: INFO: Pod "client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053623378s STEP: Saw pod success Oct 27 11:51:37.656: INFO: Pod "client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378" satisfied condition "Succeeded or Failed" Oct 27 11:51:37.659: INFO: Trying to get logs from node kali-worker pod client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378 container test-container: STEP: delete the pod Oct 27 11:51:37.701: INFO: Waiting for pod client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378 to disappear Oct 27 11:51:37.721: INFO: Pod client-containers-5acae096-d98c-49ce-acd5-2aa08fb58378 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:51:37.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4484" for this suite. 
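The Docker Containers case above overrides both halves of the image contract: container.command replaces the image ENTRYPOINT and container.args replaces the image CMD, and the pod's log output is then checked against the overridden values. A minimal illustration with the Go API types; the busybox image and the echoed arguments are assumptions, not the test's own values.

```go
// Sketch: overriding both the image ENTRYPOINT (command) and CMD (args).
// Image and values are illustrative assumptions.
package example

import corev1 "k8s.io/api/core/v1"

var overrideAll = corev1.Container{
	Name:    "test-container",
	Image:   "busybox",
	Command: []string{"/bin/echo"},             // replaces the image ENTRYPOINT
	Args:    []string{"override", "arguments"}, // replaces the image CMD
}
```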
• [SLOW TEST:6.339 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":252,"skipped":4054,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:51:37.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1027 11:51:49.739321 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 27 11:52:51.758: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 27 11:52:51.758: INFO: Deleting pod "simpletest-rc-to-be-deleted-72l44" in namespace "gc-7695" Oct 27 11:52:51.775: INFO: Deleting pod "simpletest-rc-to-be-deleted-csl5k" in namespace "gc-7695" Oct 27 11:52:51.846: INFO: Deleting pod "simpletest-rc-to-be-deleted-fsnj8" in namespace "gc-7695" Oct 27 11:52:51.923: INFO: Deleting pod "simpletest-rc-to-be-deleted-nj49w" in namespace "gc-7695" Oct 27 11:52:52.451: INFO: Deleting pod "simpletest-rc-to-be-deleted-pxhdm" in namespace "gc-7695" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:52:52.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7695" for this suite. 
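In the two-owner case above, half of the pods created by simpletest-rc-to-be-deleted are given a second ownerReference pointing at simpletest-rc-to-stay, so deleting the first RC must not garbage-collect them; the surviving simpletest-rc-to-be-deleted-* pods are then removed by hand at the end of the test. A sketch of what such a doubly-owned pod's metadata looks like; the pod name and UIDs are placeholders, not values from this run.

```go
// Sketch: a pod with ownerReferences to two ReplicationControllers; the
// garbage collector deletes it only once *both* owners are gone.
// Pod name and UIDs are placeholders.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

var doublyOwnedPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name: "simpletest-rc-to-be-deleted-placeholder",
		OwnerReferences: []metav1.OwnerReference{
			{APIVersion: "v1", Kind: "ReplicationController",
				Name: "simpletest-rc-to-be-deleted", UID: types.UID("uid-of-rc-to-be-deleted")},
			{APIVersion: "v1", Kind: "ReplicationController",
				Name: "simpletest-rc-to-stay", UID: types.UID("uid-of-rc-to-stay")},
		},
	},
}
```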
• [SLOW TEST:75.017 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":253,"skipped":4056,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:52:52.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 27 11:52:53.197: INFO: >>> kubeConfig: /root/.kube/config Oct 27 11:52:56.178: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:53:07.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8567" for this suite. • [SLOW TEST:14.348 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":254,"skipped":4056,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:53:07.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:53:23.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8264" for this suite. • [SLOW TEST:16.154 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":255,"skipped":4068,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:53:23.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:53:23.318: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 27 11:53:25.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5462 create -f -' Oct 27 11:53:28.851: INFO: stderr: "" Oct 27 11:53:28.851: INFO: stdout: "e2e-test-crd-publish-openapi-4424-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 27 11:53:28.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5462 delete e2e-test-crd-publish-openapi-4424-crds test-cr' Oct 27 11:53:29.011: INFO: stderr: "" Oct 27 11:53:29.011: INFO: stdout: "e2e-test-crd-publish-openapi-4424-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 27 11:53:29.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5462 apply -f -' Oct 27 11:53:29.295: INFO: stderr: "" Oct 27 11:53:29.295: INFO: stdout: "e2e-test-crd-publish-openapi-4424-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 27 11:53:29.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5462 delete e2e-test-crd-publish-openapi-4424-crds test-cr' Oct 27 11:53:29.413: INFO: stderr: "" Oct 27 11:53:29.414: INFO: stdout: "e2e-test-crd-publish-openapi-4424-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 27 11:53:29.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4424-crds' Oct 27 11:53:29.689: INFO: stderr: "" Oct 27 11:53:29.689: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4424-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:53:31.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5462" for this suite. • [SLOW TEST:8.415 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":256,"skipped":4080,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:53:31.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1135 Oct 27 11:53:35.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 27 11:53:36.016: INFO: stderr: "I1027 11:53:35.906636 3070 log.go:181] (0xc000874f20) (0xc000489680) Create stream\nI1027 11:53:35.906703 3070 log.go:181] (0xc000874f20) (0xc000489680) Stream added, broadcasting: 1\nI1027 11:53:35.912539 3070 log.go:181] (0xc000874f20) Reply frame received for 1\nI1027 11:53:35.912598 3070 log.go:181] 
(0xc000874f20) (0xc000488640) Create stream\nI1027 11:53:35.912627 3070 log.go:181] (0xc000874f20) (0xc000488640) Stream added, broadcasting: 3\nI1027 11:53:35.913704 3070 log.go:181] (0xc000874f20) Reply frame received for 3\nI1027 11:53:35.913737 3070 log.go:181] (0xc000874f20) (0xc000489ea0) Create stream\nI1027 11:53:35.913753 3070 log.go:181] (0xc000874f20) (0xc000489ea0) Stream added, broadcasting: 5\nI1027 11:53:35.914577 3070 log.go:181] (0xc000874f20) Reply frame received for 5\nI1027 11:53:36.000301 3070 log.go:181] (0xc000874f20) Data frame received for 5\nI1027 11:53:36.000330 3070 log.go:181] (0xc000489ea0) (5) Data frame handling\nI1027 11:53:36.000346 3070 log.go:181] (0xc000489ea0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1027 11:53:36.005829 3070 log.go:181] (0xc000874f20) Data frame received for 3\nI1027 11:53:36.005862 3070 log.go:181] (0xc000488640) (3) Data frame handling\nI1027 11:53:36.005888 3070 log.go:181] (0xc000488640) (3) Data frame sent\nI1027 11:53:36.006295 3070 log.go:181] (0xc000874f20) Data frame received for 3\nI1027 11:53:36.006312 3070 log.go:181] (0xc000488640) (3) Data frame handling\nI1027 11:53:36.006351 3070 log.go:181] (0xc000874f20) Data frame received for 5\nI1027 11:53:36.006381 3070 log.go:181] (0xc000489ea0) (5) Data frame handling\nI1027 11:53:36.008006 3070 log.go:181] (0xc000874f20) Data frame received for 1\nI1027 11:53:36.008027 3070 log.go:181] (0xc000489680) (1) Data frame handling\nI1027 11:53:36.008047 3070 log.go:181] (0xc000489680) (1) Data frame sent\nI1027 11:53:36.008068 3070 log.go:181] (0xc000874f20) (0xc000489680) Stream removed, broadcasting: 1\nI1027 11:53:36.008090 3070 log.go:181] (0xc000874f20) Go away received\nI1027 11:53:36.008650 3070 log.go:181] (0xc000874f20) (0xc000489680) Stream removed, broadcasting: 1\nI1027 11:53:36.008687 3070 log.go:181] (0xc000874f20) (0xc000488640) Stream removed, broadcasting: 3\nI1027 11:53:36.008714 3070 log.go:181] (0xc000874f20) (0xc000489ea0) Stream removed, broadcasting: 5\n" Oct 27 11:53:36.016: INFO: stdout: "iptables" Oct 27 11:53:36.016: INFO: proxyMode: iptables Oct 27 11:53:36.026: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 27 11:53:36.048: INFO: Pod kube-proxy-mode-detector still exists Oct 27 11:53:38.048: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 27 11:53:38.063: INFO: Pod kube-proxy-mode-detector still exists Oct 27 11:53:40.048: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 27 11:53:40.061: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1135 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1135 I1027 11:53:40.128521 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1135, replica count: 3 I1027 11:53:43.178968 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1027 11:53:46.179219 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 27 11:53:46.190: INFO: Creating new exec pod Oct 27 11:53:51.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c nc -zv 
-t -w 2 affinity-nodeport-timeout 80' Oct 27 11:53:51.460: INFO: stderr: "I1027 11:53:51.350732 3088 log.go:181] (0xc00025d4a0) (0xc0005a4780) Create stream\nI1027 11:53:51.350789 3088 log.go:181] (0xc00025d4a0) (0xc0005a4780) Stream added, broadcasting: 1\nI1027 11:53:51.355423 3088 log.go:181] (0xc00025d4a0) Reply frame received for 1\nI1027 11:53:51.355479 3088 log.go:181] (0xc00025d4a0) (0xc0005a4000) Create stream\nI1027 11:53:51.355499 3088 log.go:181] (0xc00025d4a0) (0xc0005a4000) Stream added, broadcasting: 3\nI1027 11:53:51.356553 3088 log.go:181] (0xc00025d4a0) Reply frame received for 3\nI1027 11:53:51.356582 3088 log.go:181] (0xc00025d4a0) (0xc0005a40a0) Create stream\nI1027 11:53:51.356597 3088 log.go:181] (0xc00025d4a0) (0xc0005a40a0) Stream added, broadcasting: 5\nI1027 11:53:51.357563 3088 log.go:181] (0xc00025d4a0) Reply frame received for 5\nI1027 11:53:51.451688 3088 log.go:181] (0xc00025d4a0) Data frame received for 5\nI1027 11:53:51.451786 3088 log.go:181] (0xc0005a40a0) (5) Data frame handling\nI1027 11:53:51.451818 3088 log.go:181] (0xc0005a40a0) (5) Data frame sent\nI1027 11:53:51.451836 3088 log.go:181] (0xc00025d4a0) Data frame received for 5\nI1027 11:53:51.451844 3088 log.go:181] (0xc0005a40a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1027 11:53:51.451866 3088 log.go:181] (0xc0005a40a0) (5) Data frame sent\nI1027 11:53:51.451956 3088 log.go:181] (0xc00025d4a0) Data frame received for 5\nI1027 11:53:51.451970 3088 log.go:181] (0xc0005a40a0) (5) Data frame handling\nI1027 11:53:51.452538 3088 log.go:181] (0xc00025d4a0) Data frame received for 3\nI1027 11:53:51.452553 3088 log.go:181] (0xc0005a4000) (3) Data frame handling\nI1027 11:53:51.454284 3088 log.go:181] (0xc00025d4a0) Data frame received for 1\nI1027 11:53:51.454303 3088 log.go:181] (0xc0005a4780) (1) Data frame handling\nI1027 11:53:51.454319 3088 log.go:181] (0xc0005a4780) (1) Data frame sent\nI1027 11:53:51.454334 3088 log.go:181] (0xc00025d4a0) (0xc0005a4780) Stream removed, broadcasting: 1\nI1027 11:53:51.454452 3088 log.go:181] (0xc00025d4a0) Go away received\nI1027 11:53:51.454662 3088 log.go:181] (0xc00025d4a0) (0xc0005a4780) Stream removed, broadcasting: 1\nI1027 11:53:51.454686 3088 log.go:181] (0xc00025d4a0) (0xc0005a4000) Stream removed, broadcasting: 3\nI1027 11:53:51.454699 3088 log.go:181] (0xc00025d4a0) (0xc0005a40a0) Stream removed, broadcasting: 5\n" Oct 27 11:53:51.460: INFO: stdout: "" Oct 27 11:53:51.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c nc -zv -t -w 2 10.101.114.84 80' Oct 27 11:53:51.684: INFO: stderr: "I1027 11:53:51.605947 3106 log.go:181] (0xc000f0ef20) (0xc000f06500) Create stream\nI1027 11:53:51.606011 3106 log.go:181] (0xc000f0ef20) (0xc000f06500) Stream added, broadcasting: 1\nI1027 11:53:51.610310 3106 log.go:181] (0xc000f0ef20) Reply frame received for 1\nI1027 11:53:51.610348 3106 log.go:181] (0xc000f0ef20) (0xc000bea0a0) Create stream\nI1027 11:53:51.610371 3106 log.go:181] (0xc000f0ef20) (0xc000bea0a0) Stream added, broadcasting: 3\nI1027 11:53:51.611184 3106 log.go:181] (0xc000f0ef20) Reply frame received for 3\nI1027 11:53:51.611231 3106 log.go:181] (0xc000f0ef20) (0xc000f06000) Create stream\nI1027 11:53:51.611253 3106 log.go:181] (0xc000f0ef20) (0xc000f06000) Stream added, broadcasting: 5\nI1027 11:53:51.612068 3106 
log.go:181] (0xc000f0ef20) Reply frame received for 5\nI1027 11:53:51.676234 3106 log.go:181] (0xc000f0ef20) Data frame received for 5\nI1027 11:53:51.676285 3106 log.go:181] (0xc000f06000) (5) Data frame handling\nI1027 11:53:51.676297 3106 log.go:181] (0xc000f06000) (5) Data frame sent\nI1027 11:53:51.676316 3106 log.go:181] (0xc000f0ef20) Data frame received for 5\nI1027 11:53:51.676331 3106 log.go:181] (0xc000f06000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.114.84 80\nConnection to 10.101.114.84 80 port [tcp/http] succeeded!\nI1027 11:53:51.676369 3106 log.go:181] (0xc000f0ef20) Data frame received for 3\nI1027 11:53:51.676397 3106 log.go:181] (0xc000bea0a0) (3) Data frame handling\nI1027 11:53:51.677940 3106 log.go:181] (0xc000f0ef20) Data frame received for 1\nI1027 11:53:51.677956 3106 log.go:181] (0xc000f06500) (1) Data frame handling\nI1027 11:53:51.677965 3106 log.go:181] (0xc000f06500) (1) Data frame sent\nI1027 11:53:51.677974 3106 log.go:181] (0xc000f0ef20) (0xc000f06500) Stream removed, broadcasting: 1\nI1027 11:53:51.678285 3106 log.go:181] (0xc000f0ef20) (0xc000f06500) Stream removed, broadcasting: 1\nI1027 11:53:51.678300 3106 log.go:181] (0xc000f0ef20) (0xc000bea0a0) Stream removed, broadcasting: 3\nI1027 11:53:51.678307 3106 log.go:181] (0xc000f0ef20) (0xc000f06000) Stream removed, broadcasting: 5\n" Oct 27 11:53:51.684: INFO: stdout: "" Oct 27 11:53:51.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31289' Oct 27 11:53:51.898: INFO: stderr: "I1027 11:53:51.819996 3125 log.go:181] (0xc0005defd0) (0xc000d28b40) Create stream\nI1027 11:53:51.820042 3125 log.go:181] (0xc0005defd0) (0xc000d28b40) Stream added, broadcasting: 1\nI1027 11:53:51.823477 3125 log.go:181] (0xc0005defd0) Reply frame received for 1\nI1027 11:53:51.823602 3125 log.go:181] (0xc0005defd0) (0xc0005403c0) Create stream\nI1027 11:53:51.823621 3125 log.go:181] (0xc0005defd0) (0xc0005403c0) Stream added, broadcasting: 3\nI1027 11:53:51.824764 3125 log.go:181] (0xc0005defd0) Reply frame received for 3\nI1027 11:53:51.824806 3125 log.go:181] (0xc0005defd0) (0xc000c903c0) Create stream\nI1027 11:53:51.824826 3125 log.go:181] (0xc0005defd0) (0xc000c903c0) Stream added, broadcasting: 5\nI1027 11:53:51.825854 3125 log.go:181] (0xc0005defd0) Reply frame received for 5\nI1027 11:53:51.890065 3125 log.go:181] (0xc0005defd0) Data frame received for 3\nI1027 11:53:51.890120 3125 log.go:181] (0xc0005403c0) (3) Data frame handling\nI1027 11:53:51.890253 3125 log.go:181] (0xc0005defd0) Data frame received for 5\nI1027 11:53:51.890274 3125 log.go:181] (0xc000c903c0) (5) Data frame handling\nI1027 11:53:51.890288 3125 log.go:181] (0xc000c903c0) (5) Data frame sent\nI1027 11:53:51.890299 3125 log.go:181] (0xc0005defd0) Data frame received for 5\nI1027 11:53:51.890309 3125 log.go:181] (0xc000c903c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31289\nConnection to 172.18.0.12 31289 port [tcp/31289] succeeded!\nI1027 11:53:51.891892 3125 log.go:181] (0xc0005defd0) Data frame received for 1\nI1027 11:53:51.891915 3125 log.go:181] (0xc000d28b40) (1) Data frame handling\nI1027 11:53:51.891936 3125 log.go:181] (0xc000d28b40) (1) Data frame sent\nI1027 11:53:51.891951 3125 log.go:181] (0xc0005defd0) (0xc000d28b40) Stream removed, broadcasting: 1\nI1027 11:53:51.891964 3125 log.go:181] (0xc0005defd0) Go away received\nI1027 11:53:51.892404 3125 log.go:181] 
(0xc0005defd0) (0xc000d28b40) Stream removed, broadcasting: 1\nI1027 11:53:51.892426 3125 log.go:181] (0xc0005defd0) (0xc0005403c0) Stream removed, broadcasting: 3\nI1027 11:53:51.892433 3125 log.go:181] (0xc0005defd0) (0xc000c903c0) Stream removed, broadcasting: 5\n" Oct 27 11:53:51.898: INFO: stdout: "" Oct 27 11:53:51.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31289' Oct 27 11:53:52.127: INFO: stderr: "I1027 11:53:52.035583 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4640) Create stream\nI1027 11:53:52.035633 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4640) Stream added, broadcasting: 1\nI1027 11:53:52.040579 3142 log.go:181] (0xc0007bd1e0) Reply frame received for 1\nI1027 11:53:52.040627 3142 log.go:181] (0xc0007bd1e0) (0xc000a20000) Create stream\nI1027 11:53:52.040642 3142 log.go:181] (0xc0007bd1e0) (0xc000a20000) Stream added, broadcasting: 3\nI1027 11:53:52.042586 3142 log.go:181] (0xc0007bd1e0) Reply frame received for 3\nI1027 11:53:52.042628 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4000) Create stream\nI1027 11:53:52.042639 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4000) Stream added, broadcasting: 5\nI1027 11:53:52.043786 3142 log.go:181] (0xc0007bd1e0) Reply frame received for 5\nI1027 11:53:52.119069 3142 log.go:181] (0xc0007bd1e0) Data frame received for 3\nI1027 11:53:52.119105 3142 log.go:181] (0xc000a20000) (3) Data frame handling\nI1027 11:53:52.119129 3142 log.go:181] (0xc0007bd1e0) Data frame received for 5\nI1027 11:53:52.119137 3142 log.go:181] (0xc0007b4000) (5) Data frame handling\nI1027 11:53:52.119150 3142 log.go:181] (0xc0007b4000) (5) Data frame sent\nI1027 11:53:52.119157 3142 log.go:181] (0xc0007bd1e0) Data frame received for 5\nI1027 11:53:52.119163 3142 log.go:181] (0xc0007b4000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31289\nConnection to 172.18.0.13 31289 port [tcp/31289] succeeded!\nI1027 11:53:52.120620 3142 log.go:181] (0xc0007bd1e0) Data frame received for 1\nI1027 11:53:52.120643 3142 log.go:181] (0xc0007b4640) (1) Data frame handling\nI1027 11:53:52.120664 3142 log.go:181] (0xc0007b4640) (1) Data frame sent\nI1027 11:53:52.120750 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4640) Stream removed, broadcasting: 1\nI1027 11:53:52.120792 3142 log.go:181] (0xc0007bd1e0) Go away received\nI1027 11:53:52.121237 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4640) Stream removed, broadcasting: 1\nI1027 11:53:52.121261 3142 log.go:181] (0xc0007bd1e0) (0xc000a20000) Stream removed, broadcasting: 3\nI1027 11:53:52.121268 3142 log.go:181] (0xc0007bd1e0) (0xc0007b4000) Stream removed, broadcasting: 5\n" Oct 27 11:53:52.127: INFO: stdout: "" Oct 27 11:53:52.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:31289/ ; done' Oct 27 11:53:52.452: INFO: stderr: "I1027 11:53:52.271044 3160 log.go:181] (0xc0003f71e0) (0xc000938a00) Create stream\nI1027 11:53:52.271103 3160 log.go:181] (0xc0003f71e0) (0xc000938a00) Stream added, broadcasting: 1\nI1027 11:53:52.276379 3160 log.go:181] (0xc0003f71e0) Reply frame received for 1\nI1027 11:53:52.276425 3160 log.go:181] (0xc0003f71e0) (0xc000938000) Create stream\nI1027 11:53:52.276437 3160 log.go:181] (0xc0003f71e0) (0xc000938000) Stream added, broadcasting: 
3\nI1027 11:53:52.277641 3160 log.go:181] (0xc0003f71e0) Reply frame received for 3\nI1027 11:53:52.277688 3160 log.go:181] (0xc0003f71e0) (0xc000904be0) Create stream\nI1027 11:53:52.277711 3160 log.go:181] (0xc0003f71e0) (0xc000904be0) Stream added, broadcasting: 5\nI1027 11:53:52.278775 3160 log.go:181] (0xc0003f71e0) Reply frame received for 5\nI1027 11:53:52.343779 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.343814 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.343829 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.343869 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.343890 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.343908 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.347193 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.347232 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.347268 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.347960 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.347975 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.347986 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.348085 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.348116 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.348142 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.354644 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.354673 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.354688 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.355913 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.355928 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.355944 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.355961 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.355988 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.356016 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.361146 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.361164 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.361178 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.361709 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.361737 3160 log.go:181] (0xc000904be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.361759 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.361795 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.361816 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.361855 3160 log.go:181] (0xc000904be0) (5) Data frame sent\nI1027 11:53:52.368106 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.368151 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.368169 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.369126 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 
11:53:52.369160 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.369182 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.369212 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.369235 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.369254 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.372970 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.372999 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.373023 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.373523 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.373544 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.373559 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.373573 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.373582 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.373602 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.377571 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.377604 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.377622 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.377759 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.377774 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.377781 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.377999 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.378017 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.378026 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.384186 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.384203 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.384216 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.384957 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.384982 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.384998 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.385036 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.385071 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.385090 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.391150 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.391170 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.391188 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.391919 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.391944 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.391955 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.391976 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.391984 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.391991 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 
11:53:52.398968 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.399005 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.399043 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.399548 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.399575 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.399607 3160 log.go:181] (0xc000904be0) (5) Data frame sent\nI1027 11:53:52.399620 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.399632 3160 log.go:181] (0xc000904be0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.399664 3160 log.go:181] (0xc000904be0) (5) Data frame sent\nI1027 11:53:52.399697 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.399714 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.399726 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.404802 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.404820 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.404831 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.405508 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.405555 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.405573 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.405600 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.405611 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.405621 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.410773 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.410786 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.410801 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.411647 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.411658 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.411664 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.411695 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.411725 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.411752 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.416374 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.416387 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.416394 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.416903 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.416921 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.416928 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -sI1027 11:53:52.417047 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.417058 3160 log.go:181] (0xc000904be0) (5) Data frame handling\n --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.417075 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.417115 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.417143 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.417177 3160 log.go:181] (0xc000904be0) (5) Data frame 
sent\nI1027 11:53:52.422789 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.422808 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.422821 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.423530 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.423555 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.423565 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.423577 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.423586 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.423604 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.429780 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.429795 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.429805 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.430777 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.430792 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.430806 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.430833 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.430849 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.430871 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.434906 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.434932 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.434957 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.435829 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.435850 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.435877 3160 log.go:181] (0xc000904be0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.435895 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.435918 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.435940 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.442825 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.442845 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.442860 3160 log.go:181] (0xc000938000) (3) Data frame sent\nI1027 11:53:52.443633 3160 log.go:181] (0xc0003f71e0) Data frame received for 3\nI1027 11:53:52.443661 3160 log.go:181] (0xc000938000) (3) Data frame handling\nI1027 11:53:52.443681 3160 log.go:181] (0xc0003f71e0) Data frame received for 5\nI1027 11:53:52.443697 3160 log.go:181] (0xc000904be0) (5) Data frame handling\nI1027 11:53:52.445519 3160 log.go:181] (0xc0003f71e0) Data frame received for 1\nI1027 11:53:52.445561 3160 log.go:181] (0xc000938a00) (1) Data frame handling\nI1027 11:53:52.445592 3160 log.go:181] (0xc000938a00) (1) Data frame sent\nI1027 11:53:52.445616 3160 log.go:181] (0xc0003f71e0) (0xc000938a00) Stream removed, broadcasting: 1\nI1027 11:53:52.445633 3160 log.go:181] (0xc0003f71e0) Go away received\nI1027 11:53:52.445973 3160 log.go:181] (0xc0003f71e0) (0xc000938a00) Stream removed, broadcasting: 1\nI1027 11:53:52.445992 3160 log.go:181] (0xc0003f71e0) (0xc000938000) Stream removed, broadcasting: 3\nI1027 11:53:52.445999 3160 log.go:181] (0xc0003f71e0) 
(0xc000904be0) Stream removed, broadcasting: 5\n" Oct 27 11:53:52.453: INFO: stdout: "\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr\naffinity-nodeport-timeout-6sjkr" Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Received response from host: affinity-nodeport-timeout-6sjkr Oct 27 11:53:52.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.12:31289/' Oct 27 11:53:52.658: INFO: stderr: "I1027 11:53:52.591505 3178 log.go:181] (0xc0002280b0) (0xc000c960a0) Create stream\nI1027 11:53:52.591573 3178 log.go:181] (0xc0002280b0) (0xc000c960a0) Stream added, broadcasting: 1\nI1027 11:53:52.593305 3178 log.go:181] (0xc0002280b0) Reply frame received for 1\nI1027 11:53:52.593330 3178 log.go:181] (0xc0002280b0) (0xc000f463c0) Create stream\nI1027 11:53:52.593338 3178 log.go:181] (0xc0002280b0) (0xc000f463c0) Stream added, broadcasting: 3\nI1027 11:53:52.594154 3178 log.go:181] (0xc0002280b0) Reply frame received for 3\nI1027 11:53:52.594185 3178 log.go:181] (0xc0002280b0) (0xc000c96b40) Create stream\nI1027 11:53:52.594193 3178 log.go:181] (0xc0002280b0) (0xc000c96b40) Stream added, broadcasting: 5\nI1027 11:53:52.594963 3178 log.go:181] (0xc0002280b0) Reply frame received for 5\nI1027 11:53:52.647123 3178 log.go:181] (0xc0002280b0) Data frame received for 5\nI1027 11:53:52.647144 3178 log.go:181] (0xc000c96b40) (5) Data frame handling\nI1027 11:53:52.647154 3178 log.go:181] (0xc000c96b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:53:52.649323 3178 log.go:181] (0xc0002280b0) Data frame received for 3\nI1027 11:53:52.649353 3178 log.go:181] (0xc000f463c0) (3) Data 
frame handling\nI1027 11:53:52.649374 3178 log.go:181] (0xc000f463c0) (3) Data frame sent\nI1027 11:53:52.650528 3178 log.go:181] (0xc0002280b0) Data frame received for 3\nI1027 11:53:52.650566 3178 log.go:181] (0xc000f463c0) (3) Data frame handling\nI1027 11:53:52.650594 3178 log.go:181] (0xc0002280b0) Data frame received for 5\nI1027 11:53:52.650614 3178 log.go:181] (0xc000c96b40) (5) Data frame handling\nI1027 11:53:52.651891 3178 log.go:181] (0xc0002280b0) Data frame received for 1\nI1027 11:53:52.651904 3178 log.go:181] (0xc000c960a0) (1) Data frame handling\nI1027 11:53:52.651910 3178 log.go:181] (0xc000c960a0) (1) Data frame sent\nI1027 11:53:52.651917 3178 log.go:181] (0xc0002280b0) (0xc000c960a0) Stream removed, broadcasting: 1\nI1027 11:53:52.651933 3178 log.go:181] (0xc0002280b0) Go away received\nI1027 11:53:52.652222 3178 log.go:181] (0xc0002280b0) (0xc000c960a0) Stream removed, broadcasting: 1\nI1027 11:53:52.652235 3178 log.go:181] (0xc0002280b0) (0xc000f463c0) Stream removed, broadcasting: 3\nI1027 11:53:52.652241 3178 log.go:181] (0xc0002280b0) (0xc000c96b40) Stream removed, broadcasting: 5\n" Oct 27 11:53:52.658: INFO: stdout: "affinity-nodeport-timeout-6sjkr" Oct 27 11:54:07.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-1135 execpod-affinity5spzn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.12:31289/' Oct 27 11:54:07.902: INFO: stderr: "I1027 11:54:07.797262 3196 log.go:181] (0xc000018000) (0xc0005345a0) Create stream\nI1027 11:54:07.797332 3196 log.go:181] (0xc000018000) (0xc0005345a0) Stream added, broadcasting: 1\nI1027 11:54:07.798968 3196 log.go:181] (0xc000018000) Reply frame received for 1\nI1027 11:54:07.799011 3196 log.go:181] (0xc000018000) (0xc000eac000) Create stream\nI1027 11:54:07.799025 3196 log.go:181] (0xc000018000) (0xc000eac000) Stream added, broadcasting: 3\nI1027 11:54:07.799742 3196 log.go:181] (0xc000018000) Reply frame received for 3\nI1027 11:54:07.799780 3196 log.go:181] (0xc000018000) (0xc000eac0a0) Create stream\nI1027 11:54:07.799792 3196 log.go:181] (0xc000018000) (0xc000eac0a0) Stream added, broadcasting: 5\nI1027 11:54:07.800582 3196 log.go:181] (0xc000018000) Reply frame received for 5\nI1027 11:54:07.888715 3196 log.go:181] (0xc000018000) Data frame received for 5\nI1027 11:54:07.888747 3196 log.go:181] (0xc000eac0a0) (5) Data frame handling\nI1027 11:54:07.888765 3196 log.go:181] (0xc000eac0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31289/\nI1027 11:54:07.894359 3196 log.go:181] (0xc000018000) Data frame received for 3\nI1027 11:54:07.894383 3196 log.go:181] (0xc000eac000) (3) Data frame handling\nI1027 11:54:07.894393 3196 log.go:181] (0xc000eac000) (3) Data frame sent\nI1027 11:54:07.894838 3196 log.go:181] (0xc000018000) Data frame received for 5\nI1027 11:54:07.894855 3196 log.go:181] (0xc000eac0a0) (5) Data frame handling\nI1027 11:54:07.894888 3196 log.go:181] (0xc000018000) Data frame received for 3\nI1027 11:54:07.894917 3196 log.go:181] (0xc000eac000) (3) Data frame handling\nI1027 11:54:07.897008 3196 log.go:181] (0xc000018000) Data frame received for 1\nI1027 11:54:07.897044 3196 log.go:181] (0xc0005345a0) (1) Data frame handling\nI1027 11:54:07.897060 3196 log.go:181] (0xc0005345a0) (1) Data frame sent\nI1027 11:54:07.897073 3196 log.go:181] (0xc000018000) (0xc0005345a0) Stream removed, broadcasting: 1\nI1027 11:54:07.897086 3196 log.go:181] (0xc000018000) Go away received\nI1027 
11:54:07.897466 3196 log.go:181] (0xc000018000) (0xc0005345a0) Stream removed, broadcasting: 1\nI1027 11:54:07.897485 3196 log.go:181] (0xc000018000) (0xc000eac000) Stream removed, broadcasting: 3\nI1027 11:54:07.897493 3196 log.go:181] (0xc000018000) (0xc000eac0a0) Stream removed, broadcasting: 5\n" Oct 27 11:54:07.902: INFO: stdout: "affinity-nodeport-timeout-srwhv" Oct 27 11:54:07.902: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1135, will wait for the garbage collector to delete the pods Oct 27 11:54:08.005: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 12.553034ms Oct 27 11:54:08.505: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.225681ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:54:18.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1135" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:47.133 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":257,"skipped":4082,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:54:18.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 27 11:54:18.901: INFO: Waiting up to 5m0s for pod "downward-api-96da895d-86e5-4014-b708-9e1b961a4192" in namespace "downward-api-4567" to be "Succeeded or Failed" Oct 27 11:54:18.942: INFO: Pod "downward-api-96da895d-86e5-4014-b708-9e1b961a4192": Phase="Pending", Reason="", readiness=false. Elapsed: 41.163602ms Oct 27 11:54:20.946: INFO: Pod "downward-api-96da895d-86e5-4014-b708-9e1b961a4192": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045250721s Oct 27 11:54:22.950: INFO: Pod "downward-api-96da895d-86e5-4014-b708-9e1b961a4192": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049387259s STEP: Saw pod success Oct 27 11:54:22.950: INFO: Pod "downward-api-96da895d-86e5-4014-b708-9e1b961a4192" satisfied condition "Succeeded or Failed" Oct 27 11:54:22.953: INFO: Trying to get logs from node kali-worker2 pod downward-api-96da895d-86e5-4014-b708-9e1b961a4192 container dapi-container: STEP: delete the pod Oct 27 11:54:22.999: INFO: Waiting for pod downward-api-96da895d-86e5-4014-b708-9e1b961a4192 to disappear Oct 27 11:54:23.008: INFO: Pod downward-api-96da895d-86e5-4014-b708-9e1b961a4192 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:54:23.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4567" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4089,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:54:23.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 27 11:54:29.647: INFO: Successfully updated pod "labelsupdate9a47fa4c-4cfb-4fc8-90a0-d283780683a1" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:54:31.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7537" for this suite. 
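For context, the projected downwardAPI behaviour exercised by the test above can be reproduced by hand with a pod like the one below. This is a minimal sketch, not the manifest the e2e framework generates (the name labelsupdate-demo and the busybox image are made up for illustration): the labels file inside the projected volume is rewritten by the kubelet after the pod's labels are modified, which is what the "Successfully updated pod" step does programmatically.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo        # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Changing a label is the update the test performs; the mounted file is refreshed shortly after:
kubectl label pod labelsupdate-demo key=value2 --overwrite
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels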
• [SLOW TEST:8.689 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4090,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:54:31.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-9df59d96-59e5-4ac9-84a4-cd5f172160b3 in namespace container-probe-2991 Oct 27 11:54:35.933: INFO: Started pod liveness-9df59d96-59e5-4ac9-84a4-cd5f172160b3 in namespace container-probe-2991 STEP: checking the pod's current state and verifying that restartCount is present Oct 27 11:54:35.936: INFO: Initial restart count of pod liveness-9df59d96-59e5-4ac9-84a4-cd5f172160b3 is 0 Oct 27 11:54:57.984: INFO: Restart count of pod container-probe-2991/liveness-9df59d96-59e5-4ac9-84a4-cd5f172160b3 is now 1 (22.048454606s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:54:58.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2991" for this suite. 
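The /healthz liveness-probe restart seen above (restartCount going from 0 to 1) can be reproduced with a pod along these lines. This is a sketch under assumptions, not the e2e framework's exact pod: the name liveness-http-demo is made up, and the k8s.gcr.io/liveness image is the documented example server that answers /healthz successfully for a short time and then starts returning errors, so the kubelet restarts the container.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo       # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz OK briefly, then returns 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
# After the probe starts failing, the restart count increases, which is what the test asserts:
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'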
• [SLOW TEST:26.303 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4141,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:54:58.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-59d68cc7-a2af-4779-a0ec-192e5f431319 STEP: Creating a pod to test consume configMaps Oct 27 11:54:58.098: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f" in namespace "projected-3869" to be "Succeeded or Failed" Oct 27 11:54:58.118: INFO: Pod "pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.543348ms Oct 27 11:55:00.122: INFO: Pod "pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02475141s Oct 27 11:55:02.127: INFO: Pod "pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f": Phase="Running", Reason="", readiness=true. Elapsed: 4.02904097s Oct 27 11:55:04.132: INFO: Pod "pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.034167268s STEP: Saw pod success Oct 27 11:55:04.132: INFO: Pod "pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f" satisfied condition "Succeeded or Failed" Oct 27 11:55:04.135: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f container projected-configmap-volume-test: STEP: delete the pod Oct 27 11:55:04.189: INFO: Waiting for pod pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f to disappear Oct 27 11:55:04.208: INFO: Pod pod-projected-configmaps-43b52d03-1261-4526-ac99-f7498cf59c9f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:55:04.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3869" for this suite. • [SLOW TEST:6.186 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":261,"skipped":4147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:55:04.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Oct 27 11:57:04.835: INFO: Successfully updated pod "var-expansion-c6db18a6-3161-43e2-b342-94a83143f0b1" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 27 11:57:06.843: INFO: Deleting pod "var-expansion-c6db18a6-3161-43e2-b342-94a83143f0b1" in namespace "var-expansion-1810" Oct 27 11:57:06.849: INFO: Wait up to 5m0s for pod "var-expansion-c6db18a6-3161-43e2-b342-94a83143f0b1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:57:40.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1810" for this suite. 
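The variable-expansion test above relies on subpath expansion in volume mounts. As a minimal sketch of the underlying mechanism only (it does not reproduce the test's "failed condition, then update" sequence, and the pod name is hypothetical), subPathExpr expands environment variables of the container when resolving the mount path:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expansion-demo   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /volume_mount/test.log && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)   # expanded from the container's environment at mount time
  volumes:
  - name: workdir
    emptyDir: {}
EOF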
• [SLOW TEST:156.656 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":262,"skipped":4176,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:57:40.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7972 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 27 11:57:40.945: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 27 11:57:41.026: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:57:43.030: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:57:45.029: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 27 11:57:47.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:57:49.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:57:51.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:57:53.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:57:55.031: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:57:57.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:57:59.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:58:01.030: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 27 11:58:03.031: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 27 11:58:03.036: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 27 11:58:07.073: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.207:8080/dial?request=hostname&protocol=http&host=10.244.2.206&port=8080&tries=1'] Namespace:pod-network-test-7972 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:58:07.073: INFO: >>> 
kubeConfig: /root/.kube/config I1027 11:58:07.104765 7 log.go:181] (0xc000e1c580) (0xc00293b040) Create stream I1027 11:58:07.104790 7 log.go:181] (0xc000e1c580) (0xc00293b040) Stream added, broadcasting: 1 I1027 11:58:07.106724 7 log.go:181] (0xc000e1c580) Reply frame received for 1 I1027 11:58:07.106801 7 log.go:181] (0xc000e1c580) (0xc004229400) Create stream I1027 11:58:07.106830 7 log.go:181] (0xc000e1c580) (0xc004229400) Stream added, broadcasting: 3 I1027 11:58:07.107823 7 log.go:181] (0xc000e1c580) Reply frame received for 3 I1027 11:58:07.107877 7 log.go:181] (0xc000e1c580) (0xc0036f14a0) Create stream I1027 11:58:07.107895 7 log.go:181] (0xc000e1c580) (0xc0036f14a0) Stream added, broadcasting: 5 I1027 11:58:07.108747 7 log.go:181] (0xc000e1c580) Reply frame received for 5 I1027 11:58:07.203759 7 log.go:181] (0xc000e1c580) Data frame received for 3 I1027 11:58:07.203799 7 log.go:181] (0xc004229400) (3) Data frame handling I1027 11:58:07.203817 7 log.go:181] (0xc004229400) (3) Data frame sent I1027 11:58:07.205119 7 log.go:181] (0xc000e1c580) Data frame received for 3 I1027 11:58:07.205175 7 log.go:181] (0xc004229400) (3) Data frame handling I1027 11:58:07.205241 7 log.go:181] (0xc000e1c580) Data frame received for 5 I1027 11:58:07.205275 7 log.go:181] (0xc0036f14a0) (5) Data frame handling I1027 11:58:07.207194 7 log.go:181] (0xc000e1c580) Data frame received for 1 I1027 11:58:07.207234 7 log.go:181] (0xc00293b040) (1) Data frame handling I1027 11:58:07.207266 7 log.go:181] (0xc00293b040) (1) Data frame sent I1027 11:58:07.207284 7 log.go:181] (0xc000e1c580) (0xc00293b040) Stream removed, broadcasting: 1 I1027 11:58:07.207303 7 log.go:181] (0xc000e1c580) Go away received I1027 11:58:07.207449 7 log.go:181] (0xc000e1c580) (0xc00293b040) Stream removed, broadcasting: 1 I1027 11:58:07.207498 7 log.go:181] (0xc000e1c580) (0xc004229400) Stream removed, broadcasting: 3 I1027 11:58:07.207578 7 log.go:181] (0xc000e1c580) (0xc0036f14a0) Stream removed, broadcasting: 5 Oct 27 11:58:07.207: INFO: Waiting for responses: map[] Oct 27 11:58:07.211: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.207:8080/dial?request=hostname&protocol=http&host=10.244.1.153&port=8080&tries=1'] Namespace:pod-network-test-7972 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 27 11:58:07.211: INFO: >>> kubeConfig: /root/.kube/config I1027 11:58:07.240786 7 log.go:181] (0xc0005951e0) (0xc004229a40) Create stream I1027 11:58:07.240821 7 log.go:181] (0xc0005951e0) (0xc004229a40) Stream added, broadcasting: 1 I1027 11:58:07.242592 7 log.go:181] (0xc0005951e0) Reply frame received for 1 I1027 11:58:07.242623 7 log.go:181] (0xc0005951e0) (0xc00293b0e0) Create stream I1027 11:58:07.242631 7 log.go:181] (0xc0005951e0) (0xc00293b0e0) Stream added, broadcasting: 3 I1027 11:58:07.243447 7 log.go:181] (0xc0005951e0) Reply frame received for 3 I1027 11:58:07.243483 7 log.go:181] (0xc0005951e0) (0xc0043d6000) Create stream I1027 11:58:07.243495 7 log.go:181] (0xc0005951e0) (0xc0043d6000) Stream added, broadcasting: 5 I1027 11:58:07.244331 7 log.go:181] (0xc0005951e0) Reply frame received for 5 I1027 11:58:07.317960 7 log.go:181] (0xc0005951e0) Data frame received for 3 I1027 11:58:07.318001 7 log.go:181] (0xc00293b0e0) (3) Data frame handling I1027 11:58:07.318017 7 log.go:181] (0xc00293b0e0) (3) Data frame sent I1027 11:58:07.318768 7 log.go:181] (0xc0005951e0) Data frame received for 5 I1027 11:58:07.318789 7 log.go:181] 
(0xc0043d6000) (5) Data frame handling I1027 11:58:07.319589 7 log.go:181] (0xc0005951e0) Data frame received for 3 I1027 11:58:07.319663 7 log.go:181] (0xc00293b0e0) (3) Data frame handling I1027 11:58:07.325239 7 log.go:181] (0xc0005951e0) Data frame received for 1 I1027 11:58:07.325265 7 log.go:181] (0xc004229a40) (1) Data frame handling I1027 11:58:07.325285 7 log.go:181] (0xc004229a40) (1) Data frame sent I1027 11:58:07.325303 7 log.go:181] (0xc0005951e0) (0xc004229a40) Stream removed, broadcasting: 1 I1027 11:58:07.325396 7 log.go:181] (0xc0005951e0) (0xc004229a40) Stream removed, broadcasting: 1 I1027 11:58:07.325411 7 log.go:181] (0xc0005951e0) (0xc00293b0e0) Stream removed, broadcasting: 3 I1027 11:58:07.325422 7 log.go:181] (0xc0005951e0) (0xc0043d6000) Stream removed, broadcasting: 5 Oct 27 11:58:07.325: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:07.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I1027 11:58:07.325787 7 log.go:181] (0xc0005951e0) Go away received STEP: Destroying namespace "pod-network-test-7972" for this suite. • [SLOW TEST:26.459 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4187,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:07.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:07.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "events-7688" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":264,"skipped":4193,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:07.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 11:58:08.321: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 11:58:10.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396688, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396688, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396688, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739396688, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 11:58:13.449: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:14.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1910" for this suite. 
STEP: Destroying namespace "webhook-1910-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.585 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":265,"skipped":4197,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:15.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 11:58:15.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7" in namespace "projected-286" to be "Succeeded or Failed" Oct 27 11:58:15.293: INFO: Pod "downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7": Phase="Pending", Reason="", readiness=false. Elapsed: 64.557261ms Oct 27 11:58:17.297: INFO: Pod "downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068859809s Oct 27 11:58:19.318: INFO: Pod "downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.090069681s STEP: Saw pod success Oct 27 11:58:19.318: INFO: Pod "downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7" satisfied condition "Succeeded or Failed" Oct 27 11:58:19.321: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7 container client-container: STEP: delete the pod Oct 27 11:58:19.393: INFO: Waiting for pod downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7 to disappear Oct 27 11:58:19.398: INFO: Pod downwardapi-volume-c0290f0e-7ace-4286-86d8-bd7e673a74b7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:19.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-286" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4204,"failed":0} S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:19.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-a0a4ba37-4ba0-47c8-b028-0c2db0e8cde5 STEP: Creating configMap with name cm-test-opt-upd-99bca41f-c29b-469b-b302-3039d91e0ea4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a0a4ba37-4ba0-47c8-b028-0c2db0e8cde5 STEP: Updating configmap cm-test-opt-upd-99bca41f-c29b-469b-b302-3039d91e0ea4 STEP: Creating configMap with name cm-test-opt-create-1927e3a3-5682-42ee-aa4a-929cacb771c5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:29.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6990" for this suite. 
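The optional-ConfigMap-volume behaviour checked above can be illustrated with the following sketch. The names cm-opt-demo and cm-optional-demo are made up; the point is that optional: true lets the pod start even if the ConfigMap is absent, and the kubelet re-syncs the mounted files after the ConfigMap is created, updated, or deleted, which is the update the test waits to observe.

kubectl create configmap cm-opt-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo         # hypothetical name
spec:
  containers:
  - name: viewer
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-opt-demo
      optional: true             # pod starts even if the ConfigMap is missing
EOF
# Updating the source object is eventually reflected in the mounted file:
kubectl create configmap cm-opt-demo --from-literal=data-1=value-2 --dry-run=client -o yaml | kubectl apply -f -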
• [SLOW TEST:10.331 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4205,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:29.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:33.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7232" for this suite. 
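The read-only root filesystem check above corresponds to the container security context shown below. This is a hedged sketch (pod name and command are invented, not the e2e pod): with readOnlyRootFilesystem: true, any write to the container's root filesystem fails.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /should-fail; sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# Expect an error such as "touch: /should-fail: Read-only file system" in the container output:
kubectl logs readonly-rootfs-demo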
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:33.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:58:40.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7592" for this suite. 
• [SLOW TEST:7.120 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":269,"skipped":4259,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:58:41.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1027 11:58:42.112070 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 27 11:59:44.253: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:59:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5784" for this suite. 
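The orphaning behaviour verified above (deleting a Deployment with deleteOptions.PropagationPolicy=Orphan leaves its ReplicaSet behind) can be exercised directly from kubectl. A minimal sketch, with an invented deployment name; note the flag spelling is an assumption about the kubectl version in use (recent kubectl accepts --cascade=orphan, while older releases expressed the same thing as --cascade=false).

kubectl create deployment orphan-demo --image=nginx
kubectl get replicaset -l app=orphan-demo      # owned by the Deployment
# Delete only the Deployment, orphaning its ReplicaSet (propagationPolicy: Orphan):
kubectl delete deployment orphan-demo --cascade=orphan
kubectl get replicaset -l app=orphan-demo      # still present, now without an owner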
• [SLOW TEST:63.267 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":270,"skipped":4263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:59:44.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 11:59:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Oct 27 11:59:47.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 create -f -' Oct 27 11:59:53.323: INFO: stderr: "" Oct 27 11:59:53.323: INFO: stdout: "e2e-test-crd-publish-openapi-2213-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 27 11:59:53.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 delete e2e-test-crd-publish-openapi-2213-crds test-foo' Oct 27 11:59:53.430: INFO: stderr: "" Oct 27 11:59:53.430: INFO: stdout: "e2e-test-crd-publish-openapi-2213-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Oct 27 11:59:53.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 apply -f -' Oct 27 11:59:53.704: INFO: stderr: "" Oct 27 11:59:53.704: INFO: stdout: "e2e-test-crd-publish-openapi-2213-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 27 11:59:53.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 delete e2e-test-crd-publish-openapi-2213-crds test-foo' Oct 27 11:59:53.833: INFO: stderr: "" Oct 27 11:59:53.833: INFO: stdout: "e2e-test-crd-publish-openapi-2213-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Oct 27 11:59:53.833: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 create -f -' Oct 27 11:59:54.168: INFO: rc: 1 Oct 27 11:59:54.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 apply -f -' Oct 27 11:59:54.443: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 27 11:59:54.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 create -f -' Oct 27 11:59:54.710: INFO: rc: 1 Oct 27 11:59:54.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-423 apply -f -' Oct 27 11:59:55.020: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 27 11:59:55.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2213-crds' Oct 27 11:59:55.354: INFO: stderr: "" Oct 27 11:59:55.354: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2213-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 27 11:59:55.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2213-crds.metadata' Oct 27 11:59:55.657: INFO: stderr: "" Oct 27 11:59:55.657: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2213-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 27 11:59:55.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2213-crds.spec' Oct 27 11:59:55.961: INFO: stderr: "" Oct 27 11:59:55.961: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2213-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 27 11:59:55.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2213-crds.spec.bars' Oct 27 11:59:56.237: INFO: stderr: "" Oct 27 11:59:56.237: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2213-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 27 11:59:56.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2213-crds.spec.bars2' Oct 27 11:59:56.511: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 11:59:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-423" for this suite. 
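The spec above exercises publishing of a CRD's OpenAPI v3 validation schema, which is what lets kubectl explain render the field descriptions seen in the stdout dumps. Purely as an illustration (this is not the suite's own code; the group example.com, the kind Foo, and every field name below are invented), a CRD with a comparable structural schema could be registered from Go with the apiextensions clientset roughly like this:

    package main

    import (
        "context"
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative; any config with cluster-admin rights works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // A structural OpenAPI v3 schema is what the API server publishes and
        // what kubectl explain renders, as in the explain output above.
        crd := &apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "foos", Singular: "foo", Kind: "Foo",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                            Type:        "object",
                            Description: "Foo CRD for Testing",
                            Properties: map[string]apiextv1.JSONSchemaProps{
                                "spec": {Type: "object", Description: "Specification of Foo"},
                            },
                        },
                    },
                }},
            },
        }
        created, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(
            context.TODO(), crd, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created CRD", created.Name)
    }

Once such a CRD is established, kubectl explain foos and kubectl explain foos.spec would print the Description strings set in the schema, in the same shape as the explain output captured above.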
• [SLOW TEST:15.181 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":271,"skipped":4305,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 11:59:59.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 27 11:59:59.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986719 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 11:59:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 11:59:59.546: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986719 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 11:59:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 27 12:00:09.557: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986753 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 
12:00:09.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986753 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 27 12:00:19.567: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986783 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 12:00:19.567: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986783 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 27 12:00:29.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986813 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 12:00:29.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-a 4795ad16-65df-4e7f-9ddc-ca1769a89f0e 8986813 0 2020-10-27 11:59:59 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Oct 27 12:00:39.582: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-b 00ff8e58-8e28-4140-8474-d7182ecc55ca 8986843 0 2020-10-27 12:00:39 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 12:00:39.582: 
INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-b 00ff8e58-8e28-4140-8474-d7182ecc55ca 8986843 0 2020-10-27 12:00:39 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 27 12:00:49.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-b 00ff8e58-8e28-4140-8474-d7182ecc55ca 8986873 0 2020-10-27 12:00:39 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 27 12:00:49.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9585 /api/v1/namespaces/watch-9585/configmaps/e2e-watch-test-configmap-b 00ff8e58-8e28-4140-8474-d7182ecc55ca 8986873 0 2020-10-27 12:00:39 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-27 12:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:00:59.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9585" for this suite. 
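The Watchers spec above registers three label-selected watches and checks which of them observe each ADDED/MODIFIED/DELETED event. As a rough sketch only (the namespace, label key, and label value are illustrative, not taken from the suite), one such label-scoped ConfigMap watch can be opened with client-go like this:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Watch only ConfigMaps carrying the "label A" style selector.
        w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Each delivered event corresponds to one "Got : ADDED/MODIFIED/DELETED" line above.
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
    }

A watcher whose selector does not match a given ConfigMap (here, the label-B objects) simply never receives events for it, which is exactly what the spec asserts.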
• [SLOW TEST:60.145 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":272,"skipped":4309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:00:59.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:00:59.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 27 12:01:00.296: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-27T12:01:00Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-27T12:01:00Z]] name:name1 resourceVersion:8986914 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:97ada48a-88ea-46bb-b3db-2e69cb359535] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 27 12:01:10.302: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-27T12:01:10Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-27T12:01:10Z]] name:name2 resourceVersion:8986948 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c4b5e46-7be0-41f0-bfaa-c8bbf22d5074] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 27 12:01:20.310: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-27T12:01:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-27T12:01:20Z]] name:name1 
resourceVersion:8986978 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:97ada48a-88ea-46bb-b3db-2e69cb359535] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 27 12:01:30.318: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-27T12:01:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-27T12:01:30Z]] name:name2 resourceVersion:8987008 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c4b5e46-7be0-41f0-bfaa-c8bbf22d5074] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 27 12:01:40.326: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-27T12:01:00Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-27T12:01:20Z]] name:name1 resourceVersion:8987038 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:97ada48a-88ea-46bb-b3db-2e69cb359535] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 27 12:01:50.336: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-27T12:01:10Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-27T12:01:30Z]] name:name2 resourceVersion:8987068 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4c4b5e46-7be0-41f0-bfaa-c8bbf22d5074] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:02:00.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1054" for this suite. 
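The spec above performs the same kind of watch against custom resources, which the log prints as unstructured map[...] dumps. A hedged sketch using the dynamic client (the group/version/resource triplet mirrors the example CRD named in the log; everything else is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        dyn, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The resource in the log is cluster-scoped, so no namespace is set here.
        gvr := schema.GroupVersionResource{
            Group:    "mygroup.example.com",
            Version:  "v1beta1",
            Resource: "noxus",
        }
        w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        // Custom resources arrive as unstructured objects, much like the
        // map[apiVersion:... kind:WishIHadChosenNoxu ...] dumps printed above.
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
    }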
• [SLOW TEST:61.258 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":273,"skipped":4340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:02:00.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-7616a543-e186-415d-a73f-9549cd5b82db STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:02:06.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6377" for this suite. 
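The ConfigMap spec above mounts a ConfigMap containing both text and binary payloads and verifies that both are projected into the volume. A minimal sketch of creating such an object (the names and the byte payload are invented, not the suite's):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "binary-demo"},
            // Data carries UTF-8 text; BinaryData carries arbitrary bytes.
            // A configMap volume projects both maps as files in the mount directory.
            Data:       map[string]string{"text-key": "hello"},
            BinaryData: map[string][]byte{"binary-key": {0xde, 0xad, 0xbe, 0xef}},
        }
        created, err := client.CoreV1().ConfigMaps("default").Create(
            context.TODO(), cm, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created", created.Name)
    }

A pod mounting this ConfigMap as a configMap volume would then see text-key and binary-key as files under the mount path, which is what the "Waiting for pod with text data / binary data" steps check.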
• [SLOW TEST:6.141 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4418,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:02:07.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-vhpm STEP: Creating a pod to test atomic-volume-subpath Oct 27 12:02:07.063: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vhpm" in namespace "subpath-9407" to be "Succeeded or Failed" Oct 27 12:02:07.141: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Pending", Reason="", readiness=false. Elapsed: 77.440691ms Oct 27 12:02:09.146: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082408502s Oct 27 12:02:11.151: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 4.087496021s Oct 27 12:02:13.156: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 6.092224576s Oct 27 12:02:15.223: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 8.159312577s Oct 27 12:02:17.228: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 10.164541611s Oct 27 12:02:19.233: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 12.169023879s Oct 27 12:02:21.237: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 14.173856835s Oct 27 12:02:23.242: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 16.178858853s Oct 27 12:02:25.247: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 18.183513664s Oct 27 12:02:27.252: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.188347351s Oct 27 12:02:29.256: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Running", Reason="", readiness=true. Elapsed: 22.192701885s Oct 27 12:02:31.393: INFO: Pod "pod-subpath-test-configmap-vhpm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.329663593s STEP: Saw pod success Oct 27 12:02:31.393: INFO: Pod "pod-subpath-test-configmap-vhpm" satisfied condition "Succeeded or Failed" Oct 27 12:02:31.397: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-vhpm container test-container-subpath-configmap-vhpm: STEP: delete the pod Oct 27 12:02:31.455: INFO: Waiting for pod pod-subpath-test-configmap-vhpm to disappear Oct 27 12:02:31.469: INFO: Pod pod-subpath-test-configmap-vhpm no longer exists STEP: Deleting pod pod-subpath-test-configmap-vhpm Oct 27 12:02:31.469: INFO: Deleting pod "pod-subpath-test-configmap-vhpm" in namespace "subpath-9407" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:02:31.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9407" for this suite. • [SLOW TEST:24.476 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":275,"skipped":4426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:02:31.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:04:31.628: INFO: Deleting pod "var-expansion-20255e70-b983-46a8-a891-d0e64618071c" in namespace "var-expansion-9141" Oct 27 12:04:31.632: INFO: Wait up to 5m0s for pod "var-expansion-20255e70-b983-46a8-a891-d0e64618071c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:04:35.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9141" for this suite. • [SLOW TEST:124.176 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":276,"skipped":4449,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:04:35.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 12:04:36.166: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 12:04:38.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397076, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397076, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397076, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397076, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 12:04:41.294: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:04:41.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9984-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:04:42.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2960" for this suite. STEP: Destroying namespace "webhook-2960-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.869 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":277,"skipped":4455,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:04:42.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 12:04:43.400: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 12:04:45.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397083, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397083, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397083, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397083, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 12:04:48.440: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:04:49.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4298" for this suite. STEP: Destroying namespace "webhook-4298-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.641 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":278,"skipped":4456,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:04:49.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 27 12:04:49.312: INFO: Waiting up to 5m0s for pod "pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf" in namespace "emptydir-1629" to be "Succeeded or Failed" Oct 27 12:04:49.623: INFO: Pod 
"pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 310.803088ms Oct 27 12:04:51.667: INFO: Pod "pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354720297s Oct 27 12:04:53.672: INFO: Pod "pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.360203481s STEP: Saw pod success Oct 27 12:04:53.672: INFO: Pod "pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf" satisfied condition "Succeeded or Failed" Oct 27 12:04:53.676: INFO: Trying to get logs from node kali-worker pod pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf container test-container: STEP: delete the pod Oct 27 12:04:53.721: INFO: Waiting for pod pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf to disappear Oct 27 12:04:53.730: INFO: Pod pod-4675dbbc-bcb4-4ae0-9115-2e81e435e2bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:04:53.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1629" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":279,"skipped":4467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:04:53.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-59c75989-3acc-4084-8e87-2fdb72c2d287 STEP: Creating a pod to test consume secrets Oct 27 12:04:53.810: INFO: Waiting up to 5m0s for pod "pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17" in namespace "secrets-5124" to be "Succeeded or Failed" Oct 27 12:04:53.832: INFO: Pod "pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17": Phase="Pending", Reason="", readiness=false. Elapsed: 21.044607ms Oct 27 12:04:55.835: INFO: Pod "pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024604805s Oct 27 12:04:57.839: INFO: Pod "pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028081081s STEP: Saw pod success Oct 27 12:04:57.839: INFO: Pod "pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17" satisfied condition "Succeeded or Failed" Oct 27 12:04:57.841: INFO: Trying to get logs from node kali-worker pod pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17 container secret-env-test: STEP: delete the pod Oct 27 12:04:57.876: INFO: Waiting for pod pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17 to disappear Oct 27 12:04:57.882: INFO: Pod pod-secrets-e06705d3-83c7-4cd8-94a9-00a0bec98e17 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:04:57.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5124" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4503,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:04:57.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Oct 27 12:04:57.963: INFO: Waiting up to 5m0s for pod "client-containers-155953fc-fe47-4516-a343-bbeab886aeab" in namespace "containers-6299" to be "Succeeded or Failed" Oct 27 12:04:57.966: INFO: Pod "client-containers-155953fc-fe47-4516-a343-bbeab886aeab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.427834ms Oct 27 12:04:59.971: INFO: Pod "client-containers-155953fc-fe47-4516-a343-bbeab886aeab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008412035s Oct 27 12:05:01.976: INFO: Pod "client-containers-155953fc-fe47-4516-a343-bbeab886aeab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013054168s STEP: Saw pod success Oct 27 12:05:01.976: INFO: Pod "client-containers-155953fc-fe47-4516-a343-bbeab886aeab" satisfied condition "Succeeded or Failed" Oct 27 12:05:01.979: INFO: Trying to get logs from node kali-worker pod client-containers-155953fc-fe47-4516-a343-bbeab886aeab container test-container: STEP: delete the pod Oct 27 12:05:02.041: INFO: Waiting for pod client-containers-155953fc-fe47-4516-a343-bbeab886aeab to disappear Oct 27 12:05:02.044: INFO: Pod client-containers-155953fc-fe47-4516-a343-bbeab886aeab no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:02.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6299" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4514,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:02.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 12:05:02.699: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 12:05:04.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 12:05:06.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397102, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 12:05:09.805: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:05:09.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:10.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4724" for this suite. STEP: Destroying namespace "webhook-4724-markers" for this suite. 
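The AdmissionWebhook spec above registers a webhook that rejects create, update, and delete requests for a custom resource until the offending data is removed. The handler below is only a schematic of the server side of such a webhook (the /validate path, port, and certificate paths are placeholders, and a real handler would inspect the request instead of denying unconditionally):

    package main

    import (
        "encoding/json"
        "net/http"

        admissionv1 "k8s.io/api/admission/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // deny reviews an AdmissionReview and refuses it. The e2e webhook behaves
    // similarly for requests that carry the disallowed data.
    func deny(w http.ResponseWriter, r *http.Request) {
        var review admissionv1.AdmissionReview
        if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        review.Response = &admissionv1.AdmissionResponse{
            UID:     review.Request.UID,
            Allowed: false,
            Result:  &metav1.Status{Message: "denied by example webhook"},
        }
        _ = json.NewEncoder(w).Encode(review)
    }

    func main() {
        http.HandleFunc("/validate", deny)
        // The e2e webhook is served over TLS behind a Service; the cert paths
        // here stand in for whatever the webhook deployment mounts.
        _ = http.ListenAndServeTLS(":8443", "/tls/tls.crt", "/tls/tls.key", nil)
    }

With Allowed set to false, the API server surfaces the Result message to the client, which is how the "should be denied" steps above observe the rejection.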
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.123 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":282,"skipped":4530,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:11.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Oct 27 12:05:11.839: INFO: Waiting up to 5m0s for pod "var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5" in namespace "var-expansion-4425" to be "Succeeded or Failed" Oct 27 12:05:11.853: INFO: Pod "var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.829631ms Oct 27 12:05:13.857: INFO: Pod "var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017325792s Oct 27 12:05:15.865: INFO: Pod "var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025443423s STEP: Saw pod success Oct 27 12:05:15.865: INFO: Pod "var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5" satisfied condition "Succeeded or Failed" Oct 27 12:05:15.867: INFO: Trying to get logs from node kali-worker2 pod var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5 container dapi-container: STEP: delete the pod Oct 27 12:05:15.891: INFO: Waiting for pod var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5 to disappear Oct 27 12:05:15.920: INFO: Pod var-expansion-210e9b4c-5dcb-4a52-bfaf-cf1d1484d4f5 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:15.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4425" for this suite. 
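The two Variable Expansion subpath specs (the failing absolute-path case earlier and the successful substitution case just above) both revolve around expanding env vars inside a volume mount's subpath. The pod below is a sketch of the successful shape, assuming the expansion goes through subPathExpr; all names and the image are illustrative:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "subpath-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls /data"},
                    Env: []corev1.EnvVar{{
                        Name: "POD_NAME",
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        },
                    }},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "work",
                        MountPath: "/data",
                        // $(POD_NAME) is substituted from the env var above.
                        SubPathExpr: "$(POD_NAME)",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name:         "work",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
        created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created", created.Name)
    }

The earlier failing spec covers the complementary case where the expanded subpath is an absolute path, which is rejected rather than mounted.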
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":283,"skipped":4560,"failed":0} ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:15.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-c08ec6f5-70e5-4b55-a7cd-e11d63abe267 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c08ec6f5-70e5-4b55-a7cd-e11d63abe267 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:22.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7331" for this suite. • [SLOW TEST:6.225 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:22.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Oct 27 12:05:22.224: INFO: Waiting up to 5m0s for pod "var-expansion-da06431e-e679-456d-8278-9f48f68f143a" in namespace "var-expansion-8878" to be "Succeeded or Failed" Oct 27 12:05:22.275: INFO: Pod 
"var-expansion-da06431e-e679-456d-8278-9f48f68f143a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.877947ms Oct 27 12:05:24.279: INFO: Pod "var-expansion-da06431e-e679-456d-8278-9f48f68f143a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055166152s Oct 27 12:05:26.283: INFO: Pod "var-expansion-da06431e-e679-456d-8278-9f48f68f143a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059173174s STEP: Saw pod success Oct 27 12:05:26.284: INFO: Pod "var-expansion-da06431e-e679-456d-8278-9f48f68f143a" satisfied condition "Succeeded or Failed" Oct 27 12:05:26.286: INFO: Trying to get logs from node kali-worker2 pod var-expansion-da06431e-e679-456d-8278-9f48f68f143a container dapi-container: STEP: delete the pod Oct 27 12:05:26.333: INFO: Waiting for pod var-expansion-da06431e-e679-456d-8278-9f48f68f143a to disappear Oct 27 12:05:26.336: INFO: Pod var-expansion-da06431e-e679-456d-8278-9f48f68f143a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:26.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8878" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:26.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8511.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8511.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 27 12:05:32.465: INFO: DNS probes using dns-8511/dns-test-5828a363-1223-4bb6-bc05-67c9ba1da00e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:32.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8511" for this suite. • [SLOW TEST:6.202 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":286,"skipped":4627,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:32.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Oct 27 12:05:32.613: INFO: Waiting up to 5m0s for pod "client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915" in namespace "containers-9591" to be "Succeeded or Failed" Oct 27 12:05:32.706: INFO: Pod 
"client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915": Phase="Pending", Reason="", readiness=false. Elapsed: 93.387488ms Oct 27 12:05:34.711: INFO: Pod "client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098084172s Oct 27 12:05:36.715: INFO: Pod "client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915": Phase="Running", Reason="", readiness=true. Elapsed: 4.10252082s Oct 27 12:05:38.731: INFO: Pod "client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11764988s STEP: Saw pod success Oct 27 12:05:38.731: INFO: Pod "client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915" satisfied condition "Succeeded or Failed" Oct 27 12:05:38.734: INFO: Trying to get logs from node kali-worker pod client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915 container test-container: STEP: delete the pod Oct 27 12:05:38.756: INFO: Waiting for pod client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915 to disappear Oct 27 12:05:38.813: INFO: Pod client-containers-a755c1a4-0be2-44dc-a2fc-16584750c915 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9591" for this suite. • [SLOW TEST:6.276 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4630,"failed":0} [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:38.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 27 12:05:39.092: INFO: Waiting up to 5m0s for pod "pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7" in namespace "emptydir-4166" to be "Succeeded or Failed" Oct 27 12:05:39.109: INFO: Pod "pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.050957ms Oct 27 12:05:41.113: INFO: Pod "pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020625387s Oct 27 12:05:43.118: INFO: Pod "pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.025600009s Oct 27 12:05:45.122: INFO: Pod "pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029104472s STEP: Saw pod success Oct 27 12:05:45.122: INFO: Pod "pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7" satisfied condition "Succeeded or Failed" Oct 27 12:05:45.124: INFO: Trying to get logs from node kali-worker pod pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7 container test-container: STEP: delete the pod Oct 27 12:05:45.143: INFO: Waiting for pod pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7 to disappear Oct 27 12:05:45.160: INFO: Pod pod-10c020bb-4f87-4d8e-97f9-944bfe31d2b7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:05:45.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4166" for this suite. • [SLOW TEST:6.375 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:05:45.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:06:02.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2544" for this suite. 
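The ResourceQuota spec above creates a quota, creates a Secret, checks that the quota's used count captures it, and checks that the usage is released once the Secret is deleted. A rough kubectl equivalent of that lifecycle, with invented namespace and object names (the test itself drives the flow through the API and asserts on quota status):

# Invented names; limits and keys are placeholders.
kubectl create namespace quota-demo
kubectl create quota secret-quota --hard=secrets=3 -n quota-demo
kubectl create secret generic demo-secret --from-literal=key=value -n quota-demo
kubectl describe quota secret-quota -n quota-demo   # "Used" for secrets now includes demo-secret
kubectl delete secret demo-secret -n quota-demo
kubectl describe quota secret-quota -n quota-demo   # usage is released again after deletion
kubectl delete namespace quota-demo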
• [SLOW TEST:17.127 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":289,"skipped":4653,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:06:02.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 27 12:06:02.913: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 27 12:06:05.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397162, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397162, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397163, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397162, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 27 12:06:07.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397162, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397162, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397163, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397162, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 12:06:10.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:06:10.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3569" for this suite. STEP: Destroying namespace "webhook-3569-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.963 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":290,"skipped":4670,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:06:10.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 12:06:10.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38" in namespace "downward-api-6021" to be "Succeeded or Failed" Oct 27 
12:06:10.408: INFO: Pod "downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 27.22289ms Oct 27 12:06:12.438: INFO: Pod "downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056921904s Oct 27 12:06:14.762: INFO: Pod "downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.3810585s STEP: Saw pod success Oct 27 12:06:14.762: INFO: Pod "downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38" satisfied condition "Succeeded or Failed" Oct 27 12:06:14.818: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38 container client-container: STEP: delete the pod Oct 27 12:06:14.933: INFO: Waiting for pod downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38 to disappear Oct 27 12:06:14.935: INFO: Pod downwardapi-volume-210b6e0d-9801-4705-a4f0-9a0602d1ef38 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:06:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6021" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4683,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:06:14.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 27 12:06:15.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71" in namespace "projected-6310" to be "Succeeded or Failed" Oct 27 12:06:15.052: INFO: Pod "downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71": Phase="Pending", Reason="", readiness=false. Elapsed: 19.807489ms Oct 27 12:06:17.056: INFO: Pod "downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024358686s Oct 27 12:06:19.061: INFO: Pod "downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029248287s STEP: Saw pod success Oct 27 12:06:19.061: INFO: Pod "downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71" satisfied condition "Succeeded or Failed" Oct 27 12:06:19.063: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71 container client-container: STEP: delete the pod Oct 27 12:06:19.082: INFO: Waiting for pod downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71 to disappear Oct 27 12:06:19.086: INFO: Pod downwardapi-volume-56966df3-e1f2-4c0a-8ea4-8e19032c7c71 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:06:19.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6310" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4685,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:06:19.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 27 12:06:20.649: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 27 12:06:22.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397180, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397180, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397180, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397180, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the 
webhook service STEP: Verifying the service has paired with the endpoint Oct 27 12:06:25.692: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:06:25.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:06:26.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6927" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.788 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":293,"skipped":4694,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:06:26.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:06:26.994: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b8425120-7865-4d95-9097-52ef7f35a9e6" in namespace "security-context-test-3832" to be "Succeeded or Failed" Oct 27 12:06:27.025: INFO: Pod "busybox-readonly-false-b8425120-7865-4d95-9097-52ef7f35a9e6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.475485ms Oct 27 12:06:29.054: INFO: Pod "busybox-readonly-false-b8425120-7865-4d95-9097-52ef7f35a9e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059386507s Oct 27 12:06:31.058: INFO: Pod "busybox-readonly-false-b8425120-7865-4d95-9097-52ef7f35a9e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063968274s Oct 27 12:06:31.059: INFO: Pod "busybox-readonly-false-b8425120-7865-4d95-9097-52ef7f35a9e6" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:06:31.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3832" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4706,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:06:31.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3753 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-3753 Oct 27 12:06:31.252: INFO: Found 0 stateful pods, waiting for 1 Oct 27 12:06:41.257: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 27 12:06:41.313: INFO: Deleting all statefulset in ns statefulset-3753 Oct 27 12:06:41.358: INFO: Scaling statefulset ss to 0 Oct 27 12:07:01.501: INFO: Waiting for statefulset status.replicas updated to 0 Oct 27 12:07:01.504: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:01.521: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3753" for this suite. • [SLOW TEST:30.448 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":295,"skipped":4709,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:01.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1079/configmap-test-219188a6-382b-440f-8bdb-432f0db117cd STEP: Creating a pod to test consume configMaps Oct 27 12:07:01.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b" in namespace "configmap-1079" to be "Succeeded or Failed" Oct 27 12:07:01.698: INFO: Pod "pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.282821ms Oct 27 12:07:03.703: INFO: Pod "pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031871796s Oct 27 12:07:05.708: INFO: Pod "pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036591663s STEP: Saw pod success Oct 27 12:07:05.708: INFO: Pod "pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b" satisfied condition "Succeeded or Failed" Oct 27 12:07:05.710: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b container env-test: STEP: delete the pod Oct 27 12:07:05.853: INFO: Waiting for pod pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b to disappear Oct 27 12:07:05.951: INFO: Pod pod-configmaps-52585383-7333-44d8-afe1-0423a72a693b no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:05.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1079" for this suite. 
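The [sig-node] ConfigMap spec above injects ConfigMap data into a container's environment. The two API mechanisms for this are a per-variable configMapKeyRef and a bulk envFrom configMapRef; a small sketch using illustrative names and a busybox image (the test generates its own names and uses its own test image):

# Illustrative names and image only.
kubectl create namespace cm-env-demo
kubectl create configmap demo-config --from-literal=DATA_1=value-1 -n cm-env-demo
kubectl apply -n cm-env-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep DATA_"]
    env:
    - name: DATA_1              # one key, pulled explicitly
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: DATA_1
    envFrom:                    # or import every key from the ConfigMap at once
    - configMapRef:
        name: demo-config
EOF
kubectl logs -n cm-env-demo cm-env-demo   # once the container has run, shows DATA_1=value-1
kubectl delete namespace cm-env-demo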
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":296,"skipped":4713,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:06.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:06.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9350" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":297,"skipped":4718,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:06.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 27 12:07:06.234: INFO: >>> kubeConfig: /root/.kube/config Oct 27 12:07:09.201: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:20.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2571" for this suite. 
• [SLOW TEST:13.887 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":298,"skipped":4734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:20.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 27 12:07:20.603: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 27 12:07:22.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397240, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397240, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397240, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63739397240, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 27 12:07:25.702: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 27 12:07:25.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:26.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8727" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.976 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":299,"skipped":4787,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:27.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-704efdde-8603-4c1d-8a96-d88b13c2c68f STEP: Creating a pod to test consume secrets Oct 27 12:07:27.169: INFO: Waiting up to 5m0s for pod "pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269" in namespace "secrets-1293" to be "Succeeded or Failed" Oct 27 12:07:27.176: INFO: Pod "pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269": Phase="Pending", Reason="", readiness=false. Elapsed: 7.571535ms Oct 27 12:07:29.181: INFO: Pod "pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011935487s Oct 27 12:07:31.185: INFO: Pod "pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016361536s STEP: Saw pod success Oct 27 12:07:31.185: INFO: Pod "pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269" satisfied condition "Succeeded or Failed" Oct 27 12:07:31.189: INFO: Trying to get logs from node kali-worker pod pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269 container secret-volume-test: STEP: delete the pod Oct 27 12:07:31.228: INFO: Waiting for pod pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269 to disappear Oct 27 12:07:31.236: INFO: Pod pod-secrets-6e25fc51-07f3-4f34-8bc0-117460546269 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:31.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1293" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4796,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:31.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 27 12:07:35.349: INFO: &Pod{ObjectMeta:{send-events-fe887c99-c0a9-4242-945c-f24b995203dd events-2411 /api/v1/namespaces/events-2411/pods/send-events-fe887c99-c0a9-4242-945c-f24b995203dd e807c704-2ae1-4b67-9e66-7c12a7c2bc9c 8989115 0 2020-10-27 12:07:31 +0000 UTC map[name:foo time:302386080] map[] [] [] [{e2e.test Update v1 2020-10-27 12:07:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-27 12:07:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.225\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mghpt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mghpt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mghpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 12:07:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 12:07:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 12:07:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-27 12:07:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.225,StartTime:2020-10-27 12:07:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-27 12:07:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://5834ab700f170c129c36ecce13dfee4c53b36ad5bcd2ae091ded705e3faa33aa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 27 12:07:37.355: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 27 12:07:39.359: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:39.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2411" for this suite. • [SLOW TEST:8.157 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":301,"skipped":4807,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:39.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 27 12:07:39.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6174' Oct 27 12:07:39.560: INFO: stderr: "" Oct 27 12:07:39.560: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Oct 27 12:07:39.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6174' Oct 27 12:07:43.533: INFO: stderr: "" Oct 27 12:07:43.533: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:43.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6174" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":302,"skipped":4822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 27 12:07:43.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 27 12:07:54.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1060" for this suite. • [SLOW TEST:11.214 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":303,"skipped":4847,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 27 12:07:54.757: INFO: Running AfterSuite actions on all nodes Oct 27 12:07:54.757: INFO: Running AfterSuite actions on node 1 Oct 27 12:07:54.757: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 5787.291 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS
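Note on the final spec above ([sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service): the quota accounting it checks can be reproduced by hand with plain kubectl. A minimal sketch, assuming a scratch namespace and illustrative object names (quota-demo, test-quota, test-svc are placeholders, not the names the suite generated):

# STEP: Creating a ResourceQuota that limits the number of Services
kubectl create namespace quota-demo
kubectl create quota test-quota --hard=services=1 -n quota-demo
# STEP: Ensuring resource quota status is calculated (Used should start at services: 0)
kubectl describe quota test-quota -n quota-demo
# STEP: Creating a Service
kubectl create service clusterip test-svc --tcp=80:80 -n quota-demo
# STEP: Ensuring resource quota status captures service creation (Used rises to services: 1)
kubectl describe quota test-quota -n quota-demo
# STEP: Deleting a Service
kubectl delete service test-svc -n quota-demo
# STEP: Ensuring resource quota status released usage (Used drops back to services: 0)
kubectl describe quota test-quota -n quota-demo
kubectl delete namespace quota-demo

The conformance test drives the same transitions through the Kubernetes API rather than the CLI, but the quota status fields it waits on (hard vs. used for the services resource) are the same ones kubectl describe quota reports.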