I0819 13:46:23.837505 10 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0819 13:46:23.843931 10 e2e.go:129] Starting e2e run "cc2da83f-3828-4aa2-8bb2-ad9bc28cd7a9" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597844769 - Will randomize all specs
Will run 303 of 5237 specs

Aug 19 13:46:24.428: INFO: >>> kubeConfig: /root/.kube/config
Aug 19 13:46:24.493: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 19 13:46:25.019: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 19 13:46:25.351: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 19 13:46:25.351: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 19 13:46:25.352: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 19 13:46:25.410: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 19 13:46:25.410: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 19 13:46:25.411: INFO: e2e test version: v1.19.0-rc.4
Aug 19 13:46:25.415: INFO: kube-apiserver version: v1.19.0-rc.1
Aug 19 13:46:25.416: INFO: >>> kubeConfig: /root/.kube/config
Aug 19 13:46:25.444: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:46:25.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
Aug 19 13:46:27.772: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 19 13:46:27.782: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:46:44.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5472" for this suite.
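
The core of the spec above is a single list call against the apiextensions API group. A minimal client-go sketch of that call, assuming the same /root/.kube/config the run logs; names and error handling are illustrative, not taken from the test source:

    package main

    import (
    	"context"
    	"fmt"

    	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a rest.Config from the same kubeconfig the suite logs above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := apiextensionsclient.NewForConfigOrDie(cfg)

    	// The spec's core assertion: listing CustomResourceDefinition objects succeeds.
    	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("found %d CRDs\n", len(crds.Items))
    }

The conformance test wraps this call in the namespace setup and teardown that the surrounding STEP lines record.
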
• [SLOW TEST:19.450 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":1,"skipped":27,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:46:44.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 19 13:46:48.673: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 19 13:46:51.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441609, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 13:46:53.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441609, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 19 13:46:55.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441609, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441608, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 19 13:46:58.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:46:59.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2181" for this suite.
STEP: Destroying namespace "webhook-2181-markers" for this suite.
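
The patch step logged above reduces to a JSON patch against the ValidatingWebhookConfiguration's rules. A sketch only, assuming a clientset built as in the earlier example; the configuration name and rule index are illustrative, since the test generates its own:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Mirrors "Patching a validating webhook configuration's rules to
    	// include the create operation": replace the operations list on the
    	// first rule of the first webhook. Name below is hypothetical.
    	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
    	if _, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Patch(
    		context.TODO(), "e2e-test-validating-webhook-cfg", types.JSONPatchType, patch, metav1.PatchOptions{},
    	); err != nil {
    		panic(err)
    	}
    }

After each patch or update, the test re-submits a non-compliant ConfigMap to confirm whether the webhook now admits or rejects it.
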
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.549 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":2,"skipped":28,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:47:00.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 19 13:47:00.610: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:47:20.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3758" for this suite.
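
The "setting up watch" / "verifying pod creation was observed" steps above follow the standard client-go watch pattern: subscribe before submitting the pod so no event is missed. A minimal sketch, assuming a clientset built as in the first example; the namespace is illustrative (the test uses its generated pods-XXXX namespace):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/watch"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// "setting up watch": open the event stream before the pod is created.
    	w, err := client.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// Consume events; ADDED confirms creation was observed, DELETED
    	// confirms graceful deletion was observed.
    	for ev := range w.ResultChan() {
    		fmt.Println("observed event:", ev.Type)
    		if ev.Type == watch.Deleted {
    			break
    		}
    	}
    }
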
• [SLOW TEST:19.605 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":38,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:47:20.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 19 13:47:20.699: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 19 13:48:20.778: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:48:20.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Aug 19 13:48:27.924: INFO: found a healthy node: latest-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 19 13:48:56.796: INFO: pods created so far: [1 1 1]
Aug 19 13:48:56.797: INFO: length of pods created so far: 3
Aug 19 13:49:18.968: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:49:25.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1939" for this suite.
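
The preemption path exercised above hinges on PriorityClasses: when a node is saturated, pods referencing a higher-value class evict pods with lower values, and the "pods created so far" counters track replicas of each priority as that proceeds. A sketch of creating such a class, assuming a clientset built as in the first example; the name and value are illustrative, not the ones the test generates:

    package main

    import (
    	"context"

    	schedulingv1 "k8s.io/api/scheduling/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// A PriorityClass whose Value outranks the default (0). ReplicaSet pod
    	// templates then reference it via spec.priorityClassName.
    	pc := &schedulingv1.PriorityClass{
    		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-high-priority"},
    		Value:      1000,
    	}
    	if _, err := client.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
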
[AfterEach] PreemptionExecutionPath
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:49:26.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9774" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:126.332 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":4,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:49:26.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4137 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4137;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4137 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4137;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4137.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4137.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4137.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4137.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4137.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4137.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4137.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 168.238.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.238.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.238.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.238.168_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4137 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4137;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4137 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4137;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4137.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4137.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4137.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4137.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4137.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4137.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4137.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4137.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4137.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 168.238.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.238.168_udp@PTR;check="$$(dig +tcp +noall +answer +search 168.238.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.238.168_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 19 13:49:45.794: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:45.869: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:46.630: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:46.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:47.030: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:47.339: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:48.433: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:48.457: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:48.694: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:48.698: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:48.763: INFO: Unable to read jessie_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:48.769: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:49.001: INFO: Unable to read jessie_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:49.007: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:49.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:49.893: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:50.633: INFO: Lookups using dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4137 wheezy_tcp@dns-test-service.dns-4137 wheezy_udp@dns-test-service.dns-4137.svc wheezy_tcp@dns-test-service.dns-4137.svc wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4137 jessie_tcp@dns-test-service.dns-4137 jessie_udp@dns-test-service.dns-4137.svc jessie_tcp@dns-test-service.dns-4137.svc jessie_udp@_http._tcp.dns-test-service.dns-4137.svc jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc]
Aug 19 13:49:55.641: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.646: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.650: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.653: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.661: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.665: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.669: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.697: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.701: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.705: INFO: Unable to read jessie_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.711: INFO: Unable to read jessie_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.715: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.719: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.723: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:49:55.748: INFO: Lookups using dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4137 wheezy_tcp@dns-test-service.dns-4137 wheezy_udp@dns-test-service.dns-4137.svc wheezy_tcp@dns-test-service.dns-4137.svc wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4137 jessie_tcp@dns-test-service.dns-4137 jessie_udp@dns-test-service.dns-4137.svc jessie_tcp@dns-test-service.dns-4137.svc jessie_udp@_http._tcp.dns-test-service.dns-4137.svc jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc]
Aug 19 13:50:00.639: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.643: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.647: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.651: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.657: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.662: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.665: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.668: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.692: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.696: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.700: INFO: Unable to read jessie_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.704: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.709: INFO: Unable to read jessie_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.712: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.716: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.720: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:00.745: INFO: Lookups using dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4137 wheezy_tcp@dns-test-service.dns-4137 wheezy_udp@dns-test-service.dns-4137.svc wheezy_tcp@dns-test-service.dns-4137.svc wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4137 jessie_tcp@dns-test-service.dns-4137 jessie_udp@dns-test-service.dns-4137.svc jessie_tcp@dns-test-service.dns-4137.svc jessie_udp@_http._tcp.dns-test-service.dns-4137.svc jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc]
Aug 19 13:50:05.649: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.756: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.762: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.767: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.782: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.787: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.820: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.825: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.830: INFO: Unable to read jessie_udp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.835: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137 from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.839: INFO: Unable to read jessie_udp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.843: INFO: Unable to read jessie_tcp@dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.848: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.852: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc from pod dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a: the server could not find the requested resource (get pods dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a)
Aug 19 13:50:05.876: INFO: Lookups using dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4137 wheezy_tcp@dns-test-service.dns-4137 wheezy_udp@dns-test-service.dns-4137.svc wheezy_tcp@dns-test-service.dns-4137.svc wheezy_udp@_http._tcp.dns-test-service.dns-4137.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4137.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4137 jessie_tcp@dns-test-service.dns-4137 jessie_udp@dns-test-service.dns-4137.svc jessie_tcp@dns-test-service.dns-4137.svc jessie_udp@_http._tcp.dns-test-service.dns-4137.svc jessie_tcp@_http._tcp.dns-test-service.dns-4137.svc]
Aug 19 13:50:10.828: INFO: DNS probes using dns-4137/dns-test-bde728d5-5298-4ea7-b6e1-17bab74dec4a succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:50:13.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4137" for this suite.
• [SLOW TEST:47.869 seconds]
[sig-network] DNS
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":5,"skipped":63,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:50:14.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:50:35.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-685" for this suite.
• [SLOW TEST:21.328 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":6,"skipped":67,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:50:35.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if v1 is in available api versions [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Aug 19 13:50:36.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config api-versions'
Aug 19 13:50:39.431: INFO: stderr: ""
Aug 19 13:50:39.431: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:50:39.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3599" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":7,"skipped":78,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:50:39.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-eff7831a-e03c-470e-b10e-0a4f704ff51c STEP: Creating a pod to test consume secrets Aug 19 13:50:41.619: INFO: Waiting up to 5m0s for pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759" in namespace "secrets-1449" to be "Succeeded or Failed" Aug 19 13:50:42.770: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 1.150544718s Aug 19 13:50:45.023: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 3.403808197s Aug 19 13:50:47.182: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 5.56326058s Aug 19 13:50:51.059: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 9.440126823s Aug 19 13:50:53.242: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 11.622701204s Aug 19 13:50:55.248: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 13.628780465s Aug 19 13:50:57.643: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023884914s Aug 19 13:50:59.650: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.030647142s STEP: Saw pod success Aug 19 13:50:59.650: INFO: Pod "pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759" satisfied condition "Succeeded or Failed" Aug 19 13:50:59.855: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759 container secret-volume-test: STEP: delete the pod Aug 19 13:51:00.338: INFO: Waiting for pod pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759 to disappear Aug 19 13:51:00.424: INFO: Pod pod-secrets-fc60243e-1a81-4291-a8fd-cdccd5c98759 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:51:00.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1449" for this suite. 
• [SLOW TEST:21.273 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":78,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:51:00.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-2884878e-5cba-4088-9a77-a9b775e23e99 STEP: Creating a pod to test consume secrets Aug 19 13:51:02.734: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947" in namespace "projected-9900" to be "Succeeded or Failed" Aug 19 13:51:02.976: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947": Phase="Pending", Reason="", readiness=false. Elapsed: 241.453958ms Aug 19 13:51:04.984: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249651222s Aug 19 13:51:07.273: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538905543s Aug 19 13:51:09.285: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550851226s Aug 19 13:51:11.955: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947": Phase="Running", Reason="", readiness=true. Elapsed: 9.221099732s Aug 19 13:51:14.255: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.521290203s STEP: Saw pod success Aug 19 13:51:14.256: INFO: Pod "pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947" satisfied condition "Succeeded or Failed" Aug 19 13:51:14.883: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947 container projected-secret-volume-test: STEP: delete the pod Aug 19 13:51:15.329: INFO: Waiting for pod pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947 to disappear Aug 19 13:51:15.495: INFO: Pod pod-projected-secrets-9eb450dd-01b9-41c9-a48c-e43a53981947 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:51:15.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9900" for this suite. • [SLOW TEST:14.750 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":81,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:51:15.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 19 13:51:22.689: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 19 13:51:25.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441883, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:27.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441883, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:29.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441883, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:31.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441883, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441882, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 13:51:34.151: INFO: Waiting for amount of 
service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 19 13:51:34.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:51:35.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7749" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:20.928 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":10,"skipped":83,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 13:51:36.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-643d0bd1-487a-4d10-bdcb-0243aedfc707
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 13:51:37.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2800" for this suite.
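The empty-key rejection exercised above is enforced by apiserver validation of ConfigMap data keys. For reference, a minimal client-go sketch that reproduces the same check outside the suite; the kubeconfig path matches the log, but the object name and target namespace here are illustrative assumptions:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A ConfigMap whose data map contains an empty key. The apiserver must
	// reject this at validation time, so Create returns an Invalid error
	// and nothing is persisted.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"}, // illustrative name
		Data:       map[string]string{"": "value"},
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	fmt.Println(err) // expect a data[""] validation error, not success
}

Because the object is rejected before anything is persisted, the test needs no cleanup beyond the namespace teardown above.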
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":11,"skipped":93,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:51:37.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 13:51:37.954: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 19 13:51:42.963: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 19 13:51:42.965: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 19 13:51:44.973: INFO: Creating deployment "test-rollover-deployment" Aug 19 13:51:45.086: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 19 13:51:47.101: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 19 13:51:47.112: INFO: Ensure that both replica sets have 1 created replica Aug 19 13:51:47.127: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 19 13:51:47.141: INFO: Updating deployment test-rollover-deployment Aug 19 13:51:47.141: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 19 13:51:49.211: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 19 13:51:49.313: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 19 13:51:49.328: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:51:49.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441907, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:51.343: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:51:51.343: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441907, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:53.480: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:51:53.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441911, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:55.349: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:51:55.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441911, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:57.658: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:51:57.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, 
loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441911, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:51:59.758: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:51:59.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441911, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:52:01.625: INFO: all replica sets need to contain the pod-template-hash label Aug 19 13:52:01.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441911, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733441905, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 13:52:03.345: INFO: Aug 19 13:52:03.345: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 19 13:52:03.488: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1089 /apis/apps/v1/namespaces/deployment-1089/deployments/test-rollover-deployment d64c1f0c-acf0-4445-a658-db9712b2614e 1500233 2 2020-08-19 13:51:44 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-19 13:51:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-19 13:52:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003bd0d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-19 13:51:45 +0000 UTC,LastTransitionTime:2020-08-19 13:51:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-08-19 13:52:02 +0000 UTC,LastTransitionTime:2020-08-19 13:51:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 19 13:52:03.502: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-1089 /apis/apps/v1/namespaces/deployment-1089/replicasets/test-rollover-deployment-5797c7764 20ca29c5-1194-4905-aa4a-a275eaf2e3d3 1500218 2 2020-08-19 13:51:47 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d64c1f0c-acf0-4445-a658-db9712b2614e 0x4002a14440 0x4002a14441}] [] [{kube-controller-manager Update apps/v1 2020-08-19 13:52:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d64c1f0c-acf0-4445-a658-db9712b2614e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4002a144b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 19 13:52:03.502: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 19 13:52:03.503: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1089 /apis/apps/v1/namespaces/deployment-1089/replicasets/test-rollover-controller 4b7ea097-bcef-4844-bbee-90409ff74787 1500231 2 2020-08-19 13:51:37 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d64c1f0c-acf0-4445-a658-db9712b2614e 0x4002a1432f 0x4002a14340}] [] [{e2e.test Update apps/v1 2020-08-19 13:51:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-19 13:52:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d64c1f0c-acf0-4445-a658-db9712b2614e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4002a143d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 19 13:52:03.504: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-1089 /apis/apps/v1/namespaces/deployment-1089/replicasets/test-rollover-deployment-78bc8b888c 55336a59-9c8e-4153-a0c4-7ca881254d70 1500150 2 2020-08-19 13:51:45 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d64c1f0c-acf0-4445-a658-db9712b2614e 0x4002a14527 0x4002a14528}] [] [{kube-controller-manager Update apps/v1 2020-08-19 13:51:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d64c1f0c-acf0-4445-a658-db9712b2614e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4002a145b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 19 13:52:03.537: INFO: Pod "test-rollover-deployment-5797c7764-vtmxw" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-vtmxw test-rollover-deployment-5797c7764- deployment-1089 /api/v1/namespaces/deployment-1089/pods/test-rollover-deployment-5797c7764-vtmxw 3e735c8b-8a42-48f6-881f-954bf59502f5 1500183 0 2020-08-19 13:51:47 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 20ca29c5-1194-4905-aa4a-a275eaf2e3d3 0x4002a14b20 0x4002a14b21}] [] [{kube-controller-manager Update v1 2020-08-19 13:51:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20ca29c5-1194-4905-aa4a-a275eaf2e3d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 13:51:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fsj68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fsj68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fsj68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:51:47 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:51:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.183,StartTime:2020-08-19 13:51:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 13:51:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://1c0998aca67afdf3c8debec73524b81cd502f068f12e69737aeaca9ef0834dbe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:52:03.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1089" for this suite. • [SLOW TEST:26.145 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":12,"skipped":100,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:52:03.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-e359e4e2-f1c1-4702-91db-871a3b177db7 STEP: Creating a pod to test consume secrets Aug 19 13:52:04.058: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5" in namespace "projected-3975" to be "Succeeded or Failed" Aug 19 13:52:04.066: 
INFO: Pod "pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.821037ms Aug 19 13:52:06.071: INFO: Pod "pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013170025s Aug 19 13:52:08.159: INFO: Pod "pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100870147s Aug 19 13:52:10.463: INFO: Pod "pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.405221787s STEP: Saw pod success Aug 19 13:52:10.464: INFO: Pod "pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5" satisfied condition "Succeeded or Failed" Aug 19 13:52:10.686: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5 container projected-secret-volume-test: STEP: delete the pod Aug 19 13:52:10.903: INFO: Waiting for pod pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5 to disappear Aug 19 13:52:10.940: INFO: Pod pod-projected-secrets-8a5c9b6b-e09c-47bd-a98d-d86f990751c5 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:52:10.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3975" for this suite. • [SLOW TEST:7.623 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":100,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:52:11.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-8581/secret-test-d3343445-95fb-4c2e-8e6d-0b9cd478d9b1 STEP: Creating a pod to test consume secrets Aug 19 13:52:11.948: INFO: Waiting up to 5m0s for pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284" in namespace "secrets-8581" to be "Succeeded or Failed" Aug 19 13:52:11.995: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284": Phase="Pending", 
Reason="", readiness=false. Elapsed: 46.165793ms Aug 19 13:52:14.031: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082602187s Aug 19 13:52:16.189: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240895864s Aug 19 13:52:18.213: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264151307s Aug 19 13:52:20.609: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66037063s Aug 19 13:52:23.003: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.054235993s STEP: Saw pod success Aug 19 13:52:23.003: INFO: Pod "pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284" satisfied condition "Succeeded or Failed" Aug 19 13:52:23.007: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284 container env-test: STEP: delete the pod Aug 19 13:52:23.513: INFO: Waiting for pod pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284 to disappear Aug 19 13:52:23.739: INFO: Pod pod-configmaps-5eebd769-57dd-47c2-b0ac-976c72cff284 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:52:23.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8581" for this suite. • [SLOW TEST:12.645 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":100,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:52:23.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5528 STEP: creating service affinity-nodeport in namespace services-5528 
STEP: creating replication controller affinity-nodeport in namespace services-5528 I0819 13:52:24.893150 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5528, replica count: 3 I0819 13:52:27.947558 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 13:52:30.950013 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 13:52:33.950900 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 13:52:33.977: INFO: Creating new exec pod Aug 19 13:52:41.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5528 execpod-affinity4bnqz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Aug 19 13:52:53.750: INFO: stderr: "I0819 13:52:53.630070 53 log.go:181] (0x4000f02bb0) (0x40006aec80) Create stream\nI0819 13:52:53.633892 53 log.go:181] (0x4000f02bb0) (0x40006aec80) Stream added, broadcasting: 1\nI0819 13:52:53.646640 53 log.go:181] (0x4000f02bb0) Reply frame received for 1\nI0819 13:52:53.647327 53 log.go:181] (0x4000f02bb0) (0x40006aed20) Create stream\nI0819 13:52:53.647396 53 log.go:181] (0x4000f02bb0) (0x40006aed20) Stream added, broadcasting: 3\nI0819 13:52:53.649955 53 log.go:181] (0x4000f02bb0) Reply frame received for 3\nI0819 13:52:53.650325 53 log.go:181] (0x4000f02bb0) (0x4000f84000) Create stream\nI0819 13:52:53.650407 53 log.go:181] (0x4000f02bb0) (0x4000f84000) Stream added, broadcasting: 5\nI0819 13:52:53.652583 53 log.go:181] (0x4000f02bb0) Reply frame received for 5\nI0819 13:52:53.730108 53 log.go:181] (0x4000f02bb0) Data frame received for 5\nI0819 13:52:53.730478 53 log.go:181] (0x4000f02bb0) Data frame received for 3\nI0819 13:52:53.730580 53 log.go:181] (0x40006aed20) (3) Data frame handling\nI0819 13:52:53.730706 53 log.go:181] (0x4000f84000) (5) Data frame handling\nI0819 13:52:53.730887 53 log.go:181] (0x4000f02bb0) Data frame received for 1\nI0819 13:52:53.730994 53 log.go:181] (0x40006aec80) (1) Data frame handling\nI0819 13:52:53.732288 53 log.go:181] (0x4000f84000) (5) Data frame sent\nI0819 13:52:53.732565 53 log.go:181] (0x4000f02bb0) Data frame received for 5\nI0819 13:52:53.732635 53 log.go:181] (0x4000f84000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0819 13:52:53.733206 53 log.go:181] (0x40006aec80) (1) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0819 13:52:53.735227 53 log.go:181] (0x4000f84000) (5) Data frame sent\nI0819 13:52:53.735336 53 log.go:181] (0x4000f02bb0) Data frame received for 5\nI0819 13:52:53.735921 53 log.go:181] (0x4000f02bb0) (0x40006aec80) Stream removed, broadcasting: 1\nI0819 13:52:53.736929 53 log.go:181] (0x4000f84000) (5) Data frame handling\nI0819 13:52:53.737703 53 log.go:181] (0x4000f02bb0) Go away received\nI0819 13:52:53.741634 53 log.go:181] (0x4000f02bb0) (0x40006aec80) Stream removed, broadcasting: 1\nI0819 13:52:53.741958 53 log.go:181] (0x4000f02bb0) (0x40006aed20) Stream removed, broadcasting: 3\nI0819 13:52:53.742182 53 log.go:181] (0x4000f02bb0) (0x4000f84000) Stream removed, broadcasting: 5\n" Aug 19 13:52:53.751: INFO: stdout: "" Aug 19 13:52:53.760: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5528 execpod-affinity4bnqz -- /bin/sh -x -c nc -zv -t -w 2 10.111.220.203 80' Aug 19 13:52:55.625: INFO: stderr: "I0819 13:52:55.524271 74 log.go:181] (0x4000fa4000) (0x400079c460) Create stream\nI0819 13:52:55.531662 74 log.go:181] (0x4000fa4000) (0x400079c460) Stream added, broadcasting: 1\nI0819 13:52:55.543932 74 log.go:181] (0x4000fa4000) Reply frame received for 1\nI0819 13:52:55.544680 74 log.go:181] (0x4000fa4000) (0x40008e8000) Create stream\nI0819 13:52:55.544922 74 log.go:181] (0x4000fa4000) (0x40008e8000) Stream added, broadcasting: 3\nI0819 13:52:55.546625 74 log.go:181] (0x4000fa4000) Reply frame received for 3\nI0819 13:52:55.547018 74 log.go:181] (0x4000fa4000) (0x4000d86000) Create stream\nI0819 13:52:55.547143 74 log.go:181] (0x4000fa4000) (0x4000d86000) Stream added, broadcasting: 5\nI0819 13:52:55.548885 74 log.go:181] (0x4000fa4000) Reply frame received for 5\nI0819 13:52:55.608282 74 log.go:181] (0x4000fa4000) Data frame received for 3\nI0819 13:52:55.608463 74 log.go:181] (0x4000fa4000) Data frame received for 5\nI0819 13:52:55.608618 74 log.go:181] (0x4000d86000) (5) Data frame handling\nI0819 13:52:55.608834 74 log.go:181] (0x40008e8000) (3) Data frame handling\nI0819 13:52:55.609131 74 log.go:181] (0x4000fa4000) Data frame received for 1\nI0819 13:52:55.609245 74 log.go:181] (0x400079c460) (1) Data frame handling\nI0819 13:52:55.610664 74 log.go:181] (0x400079c460) (1) Data frame sent\nI0819 13:52:55.610968 74 log.go:181] (0x4000d86000) (5) Data frame sent\nI0819 13:52:55.611082 74 log.go:181] (0x4000fa4000) Data frame received for 5\nI0819 13:52:55.611178 74 log.go:181] (0x4000d86000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.220.203 80\nConnection to 10.111.220.203 80 port [tcp/http] succeeded!\nI0819 13:52:55.612507 74 log.go:181] (0x4000fa4000) (0x400079c460) Stream removed, broadcasting: 1\nI0819 13:52:55.614171 74 log.go:181] (0x4000fa4000) Go away received\nI0819 13:52:55.617420 74 log.go:181] (0x4000fa4000) (0x400079c460) Stream removed, broadcasting: 1\nI0819 13:52:55.617645 74 log.go:181] (0x4000fa4000) (0x40008e8000) Stream removed, broadcasting: 3\nI0819 13:52:55.617812 74 log.go:181] (0x4000fa4000) (0x4000d86000) Stream removed, broadcasting: 5\n" Aug 19 13:52:55.626: INFO: stdout: "" Aug 19 13:52:55.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5528 execpod-affinity4bnqz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30289' Aug 19 13:52:57.216: INFO: stderr: "I0819 13:52:57.102058 94 log.go:181] (0x400074a000) (0x4000b7c000) Create stream\nI0819 13:52:57.107151 94 log.go:181] (0x400074a000) (0x4000b7c000) Stream added, broadcasting: 1\nI0819 13:52:57.119126 94 log.go:181] (0x400074a000) Reply frame received for 1\nI0819 13:52:57.119859 94 log.go:181] (0x400074a000) (0x40005c8000) Create stream\nI0819 13:52:57.119977 94 log.go:181] (0x400074a000) (0x40005c8000) Stream added, broadcasting: 3\nI0819 13:52:57.121348 94 log.go:181] (0x400074a000) Reply frame received for 3\nI0819 13:52:57.121572 94 log.go:181] (0x400074a000) (0x40005c80a0) Create stream\nI0819 13:52:57.121626 94 log.go:181] (0x400074a000) (0x40005c80a0) Stream added, broadcasting: 5\nI0819 13:52:57.122594 94 log.go:181] (0x400074a000) Reply frame received for 5\nI0819 13:52:57.191051 94 log.go:181] (0x400074a000) Data frame received for 5\nI0819 13:52:57.191324 94 log.go:181] (0x40005c80a0) (5) Data 
frame handling\nI0819 13:52:57.191542 94 log.go:181] (0x400074a000) Data frame received for 1\nI0819 13:52:57.191696 94 log.go:181] (0x4000b7c000) (1) Data frame handling\nI0819 13:52:57.191926 94 log.go:181] (0x400074a000) Data frame received for 3\nI0819 13:52:57.192059 94 log.go:181] (0x40005c8000) (3) Data frame handling\nI0819 13:52:57.192413 94 log.go:181] (0x40005c80a0) (5) Data frame sent\nI0819 13:52:57.192611 94 log.go:181] (0x4000b7c000) (1) Data frame sent\nI0819 13:52:57.192869 94 log.go:181] (0x400074a000) Data frame received for 5\nI0819 13:52:57.193006 94 log.go:181] (0x40005c80a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30289\nConnection to 172.18.0.11 30289 port [tcp/30289] succeeded!\nI0819 13:52:57.195553 94 log.go:181] (0x400074a000) (0x4000b7c000) Stream removed, broadcasting: 1\nI0819 13:52:57.198195 94 log.go:181] (0x400074a000) Go away received\nI0819 13:52:57.200343 94 log.go:181] (0x400074a000) (0x4000b7c000) Stream removed, broadcasting: 1\nI0819 13:52:57.201190 94 log.go:181] (0x400074a000) (0x40005c8000) Stream removed, broadcasting: 3\nI0819 13:52:57.201525 94 log.go:181] (0x400074a000) (0x40005c80a0) Stream removed, broadcasting: 5\n" Aug 19 13:52:57.217: INFO: stdout: "" Aug 19 13:52:57.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5528 execpod-affinity4bnqz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30289' Aug 19 13:52:59.033: INFO: stderr: "I0819 13:52:58.920984 114 log.go:181] (0x40006420b0) (0x4000c08280) Create stream\nI0819 13:52:58.924230 114 log.go:181] (0x40006420b0) (0x4000c08280) Stream added, broadcasting: 1\nI0819 13:52:58.933385 114 log.go:181] (0x40006420b0) Reply frame received for 1\nI0819 13:52:58.933960 114 log.go:181] (0x40006420b0) (0x4000f0c000) Create stream\nI0819 13:52:58.934017 114 log.go:181] (0x40006420b0) (0x4000f0c000) Stream added, broadcasting: 3\nI0819 13:52:58.935446 114 log.go:181] (0x40006420b0) Reply frame received for 3\nI0819 13:52:58.935829 114 log.go:181] (0x40006420b0) (0x4000a16000) Create stream\nI0819 13:52:58.935884 114 log.go:181] (0x40006420b0) (0x4000a16000) Stream added, broadcasting: 5\nI0819 13:52:58.937119 114 log.go:181] (0x40006420b0) Reply frame received for 5\nI0819 13:52:59.011436 114 log.go:181] (0x40006420b0) Data frame received for 5\nI0819 13:52:59.011806 114 log.go:181] (0x40006420b0) Data frame received for 1\nI0819 13:52:59.012107 114 log.go:181] (0x4000c08280) (1) Data frame handling\nI0819 13:52:59.012186 114 log.go:181] (0x4000a16000) (5) Data frame handling\nI0819 13:52:59.012399 114 log.go:181] (0x40006420b0) Data frame received for 3\nI0819 13:52:59.012529 114 log.go:181] (0x4000f0c000) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30289\nConnection to 172.18.0.14 30289 port [tcp/30289] succeeded!\nI0819 13:52:59.015394 114 log.go:181] (0x4000c08280) (1) Data frame sent\nI0819 13:52:59.015574 114 log.go:181] (0x4000a16000) (5) Data frame sent\nI0819 13:52:59.016082 114 log.go:181] (0x40006420b0) Data frame received for 5\nI0819 13:52:59.016135 114 log.go:181] (0x4000a16000) (5) Data frame handling\nI0819 13:52:59.017252 114 log.go:181] (0x40006420b0) (0x4000c08280) Stream removed, broadcasting: 1\nI0819 13:52:59.018042 114 log.go:181] (0x40006420b0) Go away received\nI0819 13:52:59.021061 114 log.go:181] (0x40006420b0) (0x4000c08280) Stream removed, broadcasting: 1\nI0819 13:52:59.021419 114 log.go:181] (0x40006420b0) (0x4000f0c000) Stream removed, broadcasting: 3\nI0819 
13:52:59.021630 114 log.go:181] (0x40006420b0) (0x4000a16000) Stream removed, broadcasting: 5\n" Aug 19 13:52:59.034: INFO: stdout: "" Aug 19 13:52:59.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5528 execpod-affinity4bnqz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30289/ ; done' Aug 19 13:53:00.939: INFO: stderr: "I0819 13:53:00.704038 134 log.go:181] (0x400003a0b0) (0x400039a960) Create stream\nI0819 13:53:00.708086 134 log.go:181] (0x400003a0b0) (0x400039a960) Stream added, broadcasting: 1\nI0819 13:53:00.722630 134 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0819 13:53:00.723827 134 log.go:181] (0x400003a0b0) (0x4000e981e0) Create stream\nI0819 13:53:00.723950 134 log.go:181] (0x400003a0b0) (0x4000e981e0) Stream added, broadcasting: 3\nI0819 13:53:00.725953 134 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0819 13:53:00.726475 134 log.go:181] (0x400003a0b0) (0x4000e98280) Create stream\nI0819 13:53:00.726586 134 log.go:181] (0x400003a0b0) (0x4000e98280) Stream added, broadcasting: 5\nI0819 13:53:00.728246 134 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0819 13:53:00.822515 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.822926 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.823084 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.823193 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.823871 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.824181 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.826863 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.826952 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.827042 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.827985 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.828115 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.828251 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.828368 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.828448 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.828538 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.832375 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.832492 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.832602 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.833009 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.833080 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.833140 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.833200 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.833420 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.833483 134 log.go:181] (0x4000e981e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.838038 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.838139 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.838234 134 log.go:181] (0x4000e981e0) (3) Data frame 
sent\nI0819 13:53:00.838629 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.838736 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.838886 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.839021 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.839231 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.839380 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.845000 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.845086 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.845191 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.845269 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.845350 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.845446 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.845591 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.845718 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.845857 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.850577 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.850732 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.850876 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.851393 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.851508 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.851629 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.851818 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.851975 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.852108 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.857558 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.857657 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.857756 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.858477 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.858610 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.858731 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.858855 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.858956 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.859061 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.862885 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.862972 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.863061 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.863494 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.863639 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.863743 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.863833 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.863921 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.864032 134 
log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.871236 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.871366 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.871506 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.872287 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.872410 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.872545 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.872660 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.873015 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ echo\nI0819 13:53:00.873150 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.873267 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.873416 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.873547 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.877576 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.877650 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.877731 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.878608 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.878720 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.878790 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.878883 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.878959 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.879043 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.883594 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.883685 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.883805 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.884432 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.884535 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.884611 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.884679 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.884859 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.884959 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.888411 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.888482 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.888561 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.889472 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.889581 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.889675 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.889777 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.889861 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.889945 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.893233 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.893324 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.893429 134 log.go:181] (0x4000e981e0) (3) 
Data frame sent\nI0819 13:53:00.894133 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.894239 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.894335 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.894418 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.894516 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.894618 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.898903 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.899011 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.899122 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.899698 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.899842 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.899940 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.900038 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.900128 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.900231 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.906910 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.907014 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.907123 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.907572 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.907727 134 log.go:181] (0x4000e98280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.907826 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.907936 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.908047 134 log.go:181] (0x4000e98280) (5) Data frame sent\nI0819 13:53:00.908138 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.911194 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.911316 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.911452 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.911928 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.912024 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.912092 134 log.go:181] (0x4000e98280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30289/\nI0819 13:53:00.912153 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.912216 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.912288 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.916301 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.916413 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.916534 134 log.go:181] (0x4000e981e0) (3) Data frame sent\nI0819 13:53:00.917470 134 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 13:53:00.917567 134 log.go:181] (0x4000e98280) (5) Data frame handling\nI0819 13:53:00.918443 134 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 13:53:00.918545 134 log.go:181] (0x4000e981e0) (3) Data frame handling\nI0819 13:53:00.919747 134 log.go:181] (0x400003a0b0) Data frame received for 1\nI0819 13:53:00.919847 134 log.go:181] (0x400039a960) (1) Data frame handling\nI0819 
13:53:00.919951 134 log.go:181] (0x400039a960) (1) Data frame sent\nI0819 13:53:00.921049 134 log.go:181] (0x400003a0b0) (0x400039a960) Stream removed, broadcasting: 1\nI0819 13:53:00.924094 134 log.go:181] (0x400003a0b0) Go away received\nI0819 13:53:00.927938 134 log.go:181] (0x400003a0b0) (0x400039a960) Stream removed, broadcasting: 1\nI0819 13:53:00.928534 134 log.go:181] (0x400003a0b0) (0x4000e981e0) Stream removed, broadcasting: 3\nI0819 13:53:00.928998 134 log.go:181] (0x400003a0b0) (0x4000e98280) Stream removed, broadcasting: 5\n" Aug 19 13:53:00.945: INFO: stdout: "\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm\naffinity-nodeport-b77bm" Aug 19 13:53:00.946: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.946: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.946: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.946: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.947: INFO: Received response from host: affinity-nodeport-b77bm Aug 19 13:53:00.948: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-5528, will wait for the garbage collector to delete the pods Aug 19 13:53:01.347: INFO: Deleting ReplicationController affinity-nodeport took: 239.179383ms Aug 19 13:53:01.750: INFO: Terminating ReplicationController affinity-nodeport pods took: 402.142976ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:53:19.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5528" for this suite. 
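[Editor's note] The affinity check above sends sixteen requests through the NodePort and expects the same backend name (affinity-nodeport-b77bm) every time; that behaviour comes from the Service's sessionAffinity field. A minimal sketch of the kind of Service involved — the Service name and the node address/port are taken from the log, while the selector and port numbers are illustrative assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport          # name as seen in the log
spec:
  type: NodePort                   # exposes the service on a port of every node
  sessionAffinity: ClientIP        # pin each client IP to a single backend pod
  selector:
    name: affinity-nodeport        # assumed pod label
  ports:
  - port: 80
    targetPort: 9376               # assumed backend port
EOF

# From a client pod, repeated requests to a node address should then keep
# returning one backend hostname, exactly as the loop in the log does:
for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30289/; done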
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:55.990 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":15,"skipped":120,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:53:19.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Aug 19 13:53:19.899: INFO: Waiting up to 5m0s for pod "var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6" in namespace "var-expansion-5873" to be "Succeeded or Failed" Aug 19 13:53:19.934: INFO: Pod "var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 35.636298ms Aug 19 13:53:22.077: INFO: Pod "var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177875898s Aug 19 13:53:24.232: INFO: Pod "var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33370428s Aug 19 13:53:26.609: INFO: Pod "var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.710241423s STEP: Saw pod success Aug 19 13:53:26.609: INFO: Pod "var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6" satisfied condition "Succeeded or Failed" Aug 19 13:53:26.664: INFO: Trying to get logs from node latest-worker2 pod var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6 container dapi-container: STEP: delete the pod Aug 19 13:53:27.262: INFO: Waiting for pod var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6 to disappear Aug 19 13:53:27.297: INFO: Pod var-expansion-5946811f-dcd3-4da3-beb8-50151acb7dd6 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:53:27.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5873" for this suite. 
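[Editor's note] The var-expansion pod above exercises $(VAR_NAME) substitution in a container's args. A minimal sketch of such a pod, assuming a busybox image and an illustrative variable name (the log only shows the container name dapi-container):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name as seen in the log
    image: busybox
    env:
    - name: MESSAGE                # illustrative variable
      value: "test message"
    command: ["/bin/sh"]
    # the kubelet expands $(MESSAGE) before the container starts, so the
    # container echoes the value and the pod ends up Succeeded
    args: ["-c", "echo $(MESSAGE)"]
EOF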
• [SLOW TEST:7.497 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":137,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:53:27.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 19 13:53:28.007: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 19 13:53:28.076: INFO: Waiting for terminating namespaces to be deleted... Aug 19 13:53:28.087: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 19 13:53:28.095: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 13:53:28.095: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 13:53:28.095: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 13:53:28.095: INFO: Container kube-proxy ready: true, restart count 0 Aug 19 13:53:28.095: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 19 13:53:28.101: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 13:53:28.101: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 13:53:28.101: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 19 13:53:28.101: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-8eb748d2-52da-4592-8cc3-9d3ee2a3fb52 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-8eb748d2-52da-4592-8cc3-9d3ee2a3fb52 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8eb748d2-52da-4592-8cc3-9d3ee2a3fb52 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:53:49.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4839" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.910 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":17,"skipped":148,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:53:49.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Aug 19 13:55:50.601: INFO: Successfully updated pod "var-expansion-911f7991-178b-4b01-b91f-85b3fae7a213" STEP: waiting for pod running STEP: deleting the pod gracefully Aug 19 13:55:56.793: INFO: Deleting pod "var-expansion-911f7991-178b-4b01-b91f-85b3fae7a213" in namespace "var-expansion-1597" Aug 19 13:55:56.799: INFO: Wait up to 5m0s for pod "var-expansion-911f7991-178b-4b01-b91f-85b3fae7a213" to be fully deleted [AfterEach] [k8s.io] 
Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:56:31.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1597" for this suite. • [SLOW TEST:162.372 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":18,"skipped":153,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:56:31.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 19 13:56:32.140: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 19 13:56:32.295: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 19 13:56:32.296: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 19 13:56:32.463: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 19 13:56:32.464: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 19 13:56:32.661: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 19 13:56:32.662: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 19 13:56:41.822: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:56:42.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-4447" for this suite. 
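[Editor's note] The quantities verified above decode to default requests of 100m CPU / 200Mi memory / 200Gi ephemeral-storage and default limits of 500m / 500Mi / 500Gi (209715200 bytes = 200Mi, 214748364800 = 200Gi, 524288000 = 500Mi, 536870912000 = 500Gi). A minimal sketch of a LimitRange carrying those defaults, with an illustrative object name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limits                     # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:                # copied into requests when a container sets none
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                       # copied into limits when a container sets none
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF

# A pod created without a resources block in this namespace then comes back
# with those values filled in, which is what the Verifying lines assert:
kubectl run nginx --image=nginx
kubectl get pod nginx -o jsonpath='{.spec.containers[0].resources}'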
• [SLOW TEST:12.082 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":19,"skipped":161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:56:43.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:56:45.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3565" for this suite. 
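[Editor's note] The Kubelet case above schedules a pod whose command always fails and asserts only that such a crash-looping pod can still be deleted. A sketch of the shape of pod involved, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false                  # illustrative name
spec:
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]        # exits non-zero at once, so the container crash-loops
EOF

# Deletion must succeed regardless of the restart loop:
kubectl delete pod bin-false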
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":186,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:56:45.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Aug 19 13:56:47.327: INFO: created test-podtemplate-1 Aug 19 13:56:47.626: INFO: created test-podtemplate-2 Aug 19 13:56:47.894: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Aug 19 13:56:47.985: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Aug 19 13:56:48.681: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:56:48.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-863" for this suite. 
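[Editor's note] The PodTemplates case above creates test-podtemplate-1 through -3 under a shared label and removes them with a single DeleteCollection request. A kubectl equivalent, assuming an illustrative label key (the log does not show the key used):

for i in 1 2 3; do
kubectl apply -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-$i
  labels:
    podtemplate-set: demo          # assumed label
template:
  metadata:
    labels:
      podtemplate-set: demo
  spec:
    containers:
    - name: demo                   # illustrative container
      image: busybox
      command: ["sleep", "3600"]
EOF
done

kubectl get podtemplates -l podtemplate-set=demo      # list the set by label
kubectl delete podtemplates -l podtemplate-set=demo   # DeleteCollection via selector
kubectl get podtemplates -l podtemplate-set=demo      # should find nothing afterwards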
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":21,"skipped":197,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:56:48.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 19 13:56:48.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7063' Aug 19 13:56:52.984: INFO: stderr: "" Aug 19 13:56:52.984: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 19 13:56:52.985: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7063' Aug 19 13:56:54.732: INFO: stderr: "" Aug 19 13:56:54.732: INFO: stdout: "update-demo-nautilus-f86hl update-demo-nautilus-kvz7n " Aug 19 13:56:54.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f86hl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7063' Aug 19 13:56:56.204: INFO: stderr: "" Aug 19 13:56:56.204: INFO: stdout: "" Aug 19 13:56:56.204: INFO: update-demo-nautilus-f86hl is created but not running Aug 19 13:57:01.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7063' Aug 19 13:57:02.658: INFO: stderr: "" Aug 19 13:57:02.658: INFO: stdout: "update-demo-nautilus-f86hl update-demo-nautilus-kvz7n " Aug 19 13:57:02.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f86hl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7063' Aug 19 13:57:04.556: INFO: stderr: "" Aug 19 13:57:04.556: INFO: stdout: "true" Aug 19 13:57:04.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f86hl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7063' Aug 19 13:57:06.326: INFO: stderr: "" Aug 19 13:57:06.326: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 13:57:06.327: INFO: validating pod update-demo-nautilus-f86hl Aug 19 13:57:06.333: INFO: got data: { "image": "nautilus.jpg" } Aug 19 13:57:06.334: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 19 13:57:06.335: INFO: update-demo-nautilus-f86hl is verified up and running Aug 19 13:57:06.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvz7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7063' Aug 19 13:57:07.680: INFO: stderr: "" Aug 19 13:57:07.680: INFO: stdout: "true" Aug 19 13:57:07.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvz7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7063' Aug 19 13:57:09.911: INFO: stderr: "" Aug 19 13:57:09.911: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 13:57:09.911: INFO: validating pod update-demo-nautilus-kvz7n Aug 19 13:57:10.293: INFO: got data: { "image": "nautilus.jpg" } Aug 19 13:57:10.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 19 13:57:10.293: INFO: update-demo-nautilus-kvz7n is verified up and running STEP: using delete to clean up resources Aug 19 13:57:10.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7063' Aug 19 13:57:13.129: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 19 13:57:13.130: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 19 13:57:13.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7063' Aug 19 13:57:15.634: INFO: stderr: "No resources found in kubectl-7063 namespace.\n" Aug 19 13:57:15.634: INFO: stdout: "" Aug 19 13:57:15.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7063 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 19 13:57:17.255: INFO: stderr: "" Aug 19 13:57:17.255: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:57:17.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7063" for this suite. • [SLOW TEST:28.734 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":22,"skipped":207,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:57:17.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 19 13:57:24.645: INFO: 10 pods remaining Aug 19 13:57:24.645: INFO: 10 pods has nil DeletionTimestamp Aug 19 13:57:24.645: INFO: Aug 19 13:57:26.816: INFO: 10 pods remaining Aug 19 13:57:26.816: INFO: 3 pods has nil DeletionTimestamp Aug 19 13:57:26.816: INFO: Aug 19 13:57:30.133: INFO: 0 pods remaining Aug 19 13:57:30.133: INFO: 0 pods has nil DeletionTimestamp Aug 19 13:57:30.133: INFO: Aug 19 13:57:32.318: INFO: 0 pods remaining Aug 19 13:57:32.318: INFO: 0 pods has nil DeletionTimestamp Aug 19 
13:57:32.318: INFO: Aug 19 13:57:33.223: INFO: 0 pods remaining Aug 19 13:57:33.223: INFO: 0 pods has nil DeletionTimestamp Aug 19 13:57:33.223: INFO: STEP: Gathering metrics W0819 13:57:36.212040 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 19 13:58:39.203: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:58:39.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7255" for this suite. • [SLOW TEST:81.773 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":23,"skipped":218,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:58:39.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Aug 19 13:58:47.362: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7186 PodName:pod-sharedvolume-c9bfe742-b819-4bbc-b9ea-ae16005fc874 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 13:58:47.363: INFO: >>> kubeConfig: /root/.kube/config I0819 13:58:47.434353 10 log.go:181] (0x4000850160) (0x4001384500) Create stream I0819 13:58:47.435127 10 log.go:181] (0x4000850160) (0x4001384500) Stream added, broadcasting: 1 I0819 13:58:47.456000 10 log.go:181] (0x4000850160) Reply frame received for 1 I0819 13:58:47.457262 10 log.go:181] (0x4000850160) (0x4002749c20) Create stream I0819 13:58:47.457484 10 log.go:181] (0x4000850160) (0x4002749c20) Stream added, broadcasting: 3 I0819 13:58:47.459706 10 log.go:181] (0x4000850160) Reply frame received for 3 I0819 13:58:47.460010 10 log.go:181] (0x4000850160) (0x4002749cc0) Create stream I0819 13:58:47.460075 10 log.go:181] (0x4000850160) (0x4002749cc0) Stream 
added, broadcasting: 5 I0819 13:58:47.461653 10 log.go:181] (0x4000850160) Reply frame received for 5 I0819 13:58:47.544205 10 log.go:181] (0x4000850160) Data frame received for 3 I0819 13:58:47.545049 10 log.go:181] (0x4002749c20) (3) Data frame handling I0819 13:58:47.545218 10 log.go:181] (0x4000850160) Data frame received for 1 I0819 13:58:47.545378 10 log.go:181] (0x4001384500) (1) Data frame handling I0819 13:58:47.545488 10 log.go:181] (0x4000850160) Data frame received for 5 I0819 13:58:47.545602 10 log.go:181] (0x4002749cc0) (5) Data frame handling I0819 13:58:47.546335 10 log.go:181] (0x4001384500) (1) Data frame sent I0819 13:58:47.546566 10 log.go:181] (0x4002749c20) (3) Data frame sent I0819 13:58:47.548671 10 log.go:181] (0x4000850160) Data frame received for 3 I0819 13:58:47.548960 10 log.go:181] (0x4000850160) (0x4001384500) Stream removed, broadcasting: 1 I0819 13:58:47.549556 10 log.go:181] (0x4002749c20) (3) Data frame handling I0819 13:58:47.549867 10 log.go:181] (0x4000850160) Go away received I0819 13:58:47.552475 10 log.go:181] (0x4000850160) (0x4001384500) Stream removed, broadcasting: 1 I0819 13:58:47.552939 10 log.go:181] (0x4000850160) (0x4002749c20) Stream removed, broadcasting: 3 I0819 13:58:47.553219 10 log.go:181] (0x4000850160) (0x4002749cc0) Stream removed, broadcasting: 5 Aug 19 13:58:47.553: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:58:47.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7186" for this suite. • [SLOW TEST:8.350 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":24,"skipped":233,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:58:47.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 19 13:58:58.334: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:58:58.356: INFO: Pod pod-with-prestop-exec-hook still exists Aug 19 13:59:00.357: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:59:00.478: INFO: Pod pod-with-prestop-exec-hook still exists Aug 19 13:59:02.357: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:59:02.363: INFO: Pod pod-with-prestop-exec-hook still exists Aug 19 13:59:04.357: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:59:04.365: INFO: Pod pod-with-prestop-exec-hook still exists Aug 19 13:59:06.357: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:59:06.363: INFO: Pod pod-with-prestop-exec-hook still exists Aug 19 13:59:08.357: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:59:08.365: INFO: Pod pod-with-prestop-exec-hook still exists Aug 19 13:59:10.356: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 19 13:59:10.363: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:10.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4611" for this suite. • [SLOW TEST:22.837 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":234,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:10.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 13:59:10.567: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 19 13:59:15.615: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 19 13:59:15.616: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 19 13:59:19.709: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3980 /apis/apps/v1/namespaces/deployment-3980/deployments/test-cleanup-deployment 6927360c-48bb-44d4-944e-77792155df0a 1502518 1 2020-08-19 13:59:15 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-08-19 13:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-19 13:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40018c0f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-19 13:59:15 +0000 UTC,LastTransitionTime:2020-08-19 13:59:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-08-19 13:59:19 +0000 UTC,LastTransitionTime:2020-08-19 13:59:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 19 13:59:19.715: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-3980 /apis/apps/v1/namespaces/deployment-3980/replicasets/test-cleanup-deployment-5d446bdd47 d8a6d720-b045-444a-9fcd-3710bfe71312 1502507 1 2020-08-19 13:59:15 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6927360c-48bb-44d4-944e-77792155df0a 0x40018c1347 0x40018c1348}] [] [{kube-controller-manager Update apps/v1 2020-08-19 13:59:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6927360c-48bb-44d4-944e-77792155df0a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40018c13d8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 19 13:59:19.721: INFO: Pod "test-cleanup-deployment-5d446bdd47-llzd7" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-llzd7 test-cleanup-deployment-5d446bdd47- deployment-3980 /api/v1/namespaces/deployment-3980/pods/test-cleanup-deployment-5d446bdd47-llzd7 4bb82818-8c28-4492-9e36-94c87451bdc7 1502506 0 2020-08-19 13:59:15 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 d8a6d720-b045-444a-9fcd-3710bfe71312 0x4002886177 0x4002886178}] [] [{kube-controller-manager Update v1 2020-08-19 13:59:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d8a6d720-b045-444a-9fcd-3710bfe71312\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 13:59:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.201\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gz9c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gz9c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gz9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil
,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:59:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:59:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:59:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 13:59:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.201,StartTime:2020-08-19 13:59:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 13:59:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://296898cd3211d6aed34744990a0bf0aa06e2f6ba3cfb42da27761132c677d7cf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:19.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3980" for this suite. 
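------------------------------
Note for readers reproducing this spec by hand: the cleanup behaviour above is driven by the Deployment's revisionHistoryLimit (dumped above as RevisionHistoryLimit:*0), which tells the controller to garbage-collect superseded ReplicaSets as soon as a rollout completes. A minimal sketch of an equivalent manifest, with a hypothetical name; only the image and label match this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo              # hypothetical name
spec:
  replicas: 1
  revisionHistoryLimit: 0         # keep zero old ReplicaSets after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
EOF

After any template change, kubectl get rs should show only the newest ReplicaSet, which is the condition the spec polls for.
------------------------------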
• [SLOW TEST:9.324 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":26,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:19.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 19 13:59:20.152: INFO: Waiting up to 5m0s for pod "pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21" in namespace "emptydir-3174" to be "Succeeded or Failed" Aug 19 13:59:20.191: INFO: Pod "pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21": Phase="Pending", Reason="", readiness=false. Elapsed: 37.958897ms Aug 19 13:59:22.198: INFO: Pod "pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045412125s Aug 19 13:59:24.204: INFO: Pod "pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21": Phase="Running", Reason="", readiness=true. Elapsed: 4.051698831s Aug 19 13:59:26.212: INFO: Pod "pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059762015s STEP: Saw pod success Aug 19 13:59:26.213: INFO: Pod "pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21" satisfied condition "Succeeded or Failed" Aug 19 13:59:26.231: INFO: Trying to get logs from node latest-worker2 pod pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21 container test-container: STEP: delete the pod Aug 19 13:59:26.915: INFO: Waiting for pod pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21 to disappear Aug 19 13:59:26.930: INFO: Pod pod-f66566f8-8bd6-43e9-9228-1a3a667fdd21 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:26.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3174" for this suite. 
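------------------------------
Sketched as a standalone pod, the (non-root,0666,tmpfs) variant above combines three things: a tmpfs-backed emptyDir (medium: Memory), a file created with mode 0666, and a non-root user. The pod name, UID, and command below are illustrative, not taken from the run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo        # hypothetical name
spec:
  securityContext:
    runAsUser: 1001               # any non-root UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/vol/f && chmod 0666 /mnt/vol/f && stat -c '%a' /mnt/vol/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/vol
  volumes:
  - name: vol
    emptyDir:
      medium: Memory              # tmpfs
EOF

The pod should reach Succeeded and log 666, mirroring the "Succeeded or Failed" condition the spec waits on.
------------------------------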
• [SLOW TEST:7.739 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":284,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:27.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 13:59:28.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6" in namespace "downward-api-1813" to be "Succeeded or Failed" Aug 19 13:59:28.489: INFO: Pod "downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6": Phase="Pending", Reason="", readiness=false. Elapsed: 95.204732ms Aug 19 13:59:30.496: INFO: Pod "downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1019258s Aug 19 13:59:32.503: INFO: Pod "downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109772693s Aug 19 13:59:34.793: INFO: Pod "downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.399460182s STEP: Saw pod success Aug 19 13:59:34.793: INFO: Pod "downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6" satisfied condition "Succeeded or Failed" Aug 19 13:59:34.868: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6 container client-container: STEP: delete the pod Aug 19 13:59:35.320: INFO: Waiting for pod downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6 to disappear Aug 19 13:59:35.500: INFO: Pod downwardapi-volume-339496d1-1d10-48c3-9580-c1ce1cb834f6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:35.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1813" for this suite. 
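------------------------------
The DefaultMode assertion above concerns the permission bits applied to every file the downward API volume plugin writes. A minimal sketch, assuming a hypothetical pod name and an illustrative 0400 mode (the conformance test checks its own fixed default; this only demonstrates the knob):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400           # applied to each projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
------------------------------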
• [SLOW TEST:8.042 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":28,"skipped":284,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:35.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 19 13:59:40.327: INFO: starting watch STEP: patching STEP: updating Aug 19 13:59:40.343: INFO: waiting for watch events with expected annotations Aug 19 13:59:40.344: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:40.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-6221" for this suite. 
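------------------------------
The verbs stepped through above (create, get, list, watch, patch, update, plus the /approval and /status subresources, then delete and deletecollection) can be exercised by hand against certificates.k8s.io/v1. A sketch with hypothetical names; note that signerName and usages are required fields in the v1 API:

openssl req -new -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=demo-user" -out demo.csr
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr                  # hypothetical name
spec:
  request: $(base64 < demo.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl get csr demo-csr
kubectl certificate approve demo-csr    # writes the /approval subresource
kubectl delete csr demo-csr
------------------------------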
• [SLOW TEST:5.013 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":29,"skipped":297,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:40.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 13:59:48.801: INFO: DNS probes using dns-5424/dns-test-85b65813-b74b-47c0-a659-20f43adb51bc succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:48.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5424" for this suite. • [SLOW TEST:8.623 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":30,"skipped":304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:49.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Aug 19 13:59:49.640: INFO: Waiting up to 5m0s for pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871" in namespace "var-expansion-7476" to be "Succeeded or Failed" Aug 19 13:59:49.657: INFO: Pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871": Phase="Pending", Reason="", readiness=false. Elapsed: 16.842014ms Aug 19 13:59:51.728: INFO: Pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088553415s Aug 19 13:59:53.735: INFO: Pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09485348s Aug 19 13:59:55.752: INFO: Pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871": Phase="Running", Reason="", readiness=true. Elapsed: 6.112546664s Aug 19 13:59:57.760: INFO: Pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.119663734s STEP: Saw pod success Aug 19 13:59:57.760: INFO: Pod "var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871" satisfied condition "Succeeded or Failed" Aug 19 13:59:57.765: INFO: Trying to get logs from node latest-worker pod var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871 container dapi-container: STEP: delete the pod Aug 19 13:59:58.137: INFO: Waiting for pod var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871 to disappear Aug 19 13:59:58.146: INFO: Pod var-expansion-a1195e03-eae5-4bbc-b5b3-cd8bb6c36871 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 13:59:58.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7476" for this suite. • [SLOW TEST:8.997 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":31,"skipped":332,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 13:59:58.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:00:09.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6556" for this suite. • [SLOW TEST:11.772 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":32,"skipped":332,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:00:09.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1987 Aug 19 14:00:14.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1987 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 19 14:00:16.078: INFO: stderr: "I0819 14:00:15.950418 376 log.go:181] (0x400012d340) (0x4000d02500) Create stream\nI0819 14:00:15.954449 376 log.go:181] (0x400012d340) (0x4000d02500) Stream added, broadcasting: 1\nI0819 14:00:15.962904 376 log.go:181] (0x400012d340) Reply frame received for 1\nI0819 14:00:15.963467 376 log.go:181] (0x400012d340) (0x4000afa000) Create stream\nI0819 14:00:15.963530 376 log.go:181] (0x400012d340) (0x4000afa000) Stream added, broadcasting: 3\nI0819 14:00:15.964967 376 log.go:181] (0x400012d340) Reply frame received for 3\nI0819 14:00:15.965223 376 log.go:181] (0x400012d340) (0x4000d025a0) Create stream\nI0819 14:00:15.965281 376 log.go:181] (0x400012d340) (0x4000d025a0) Stream added, broadcasting: 5\nI0819 14:00:15.966615 376 log.go:181] (0x400012d340) Reply frame received for 5\nI0819 14:00:16.055071 376 log.go:181] (0x400012d340) Data frame received for 5\nI0819 14:00:16.055448 376 log.go:181] (0x4000d025a0) (5) Data frame handling\nI0819 14:00:16.056269 376 log.go:181] (0x4000d025a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0819 14:00:16.057916 376 log.go:181] (0x400012d340) Data frame received for 3\nI0819 14:00:16.058007 376 log.go:181] (0x4000afa000) (3) Data frame handling\nI0819 14:00:16.058102 376 log.go:181] (0x4000afa000) (3) Data frame sent\nI0819 14:00:16.058812 376 log.go:181] (0x400012d340) Data frame received for 5\nI0819 14:00:16.058905 376 log.go:181] (0x4000d025a0) (5) Data frame handling\nI0819 14:00:16.059265 376 log.go:181] (0x400012d340) Data frame received for 3\nI0819 14:00:16.059354 376 log.go:181] (0x4000afa000) (3) Data frame handling\nI0819 
14:00:16.061086 376 log.go:181] (0x400012d340) Data frame received for 1\nI0819 14:00:16.061154 376 log.go:181] (0x4000d02500) (1) Data frame handling\nI0819 14:00:16.061234 376 log.go:181] (0x4000d02500) (1) Data frame sent\nI0819 14:00:16.062311 376 log.go:181] (0x400012d340) (0x4000d02500) Stream removed, broadcasting: 1\nI0819 14:00:16.065519 376 log.go:181] (0x400012d340) Go away received\nI0819 14:00:16.067652 376 log.go:181] (0x400012d340) (0x4000d02500) Stream removed, broadcasting: 1\nI0819 14:00:16.067921 376 log.go:181] (0x400012d340) (0x4000afa000) Stream removed, broadcasting: 3\nI0819 14:00:16.068101 376 log.go:181] (0x400012d340) (0x4000d025a0) Stream removed, broadcasting: 5\n" Aug 19 14:00:16.079: INFO: stdout: "iptables" Aug 19 14:00:16.079: INFO: proxyMode: iptables Aug 19 14:00:16.087: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:16.127: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:18.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:18.136: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:20.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:20.135: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:22.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:22.135: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:24.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:24.134: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:26.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:26.135: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:28.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:28.135: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:00:30.128: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:00:30.133: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1987 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1987 I0819 14:00:30.272892 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1987, replica count: 3 I0819 14:00:33.324374 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:00:36.325311 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:00:39.326483 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 14:00:39.337: INFO: Creating new exec pod Aug 19 14:00:46.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityrzd6l -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Aug 19 14:00:48.226: INFO: stderr: "I0819 14:00:48.099428 396 log.go:181] (0x4000e15290) (0x4000648640) Create stream\nI0819 14:00:48.101832 396 log.go:181] (0x4000e15290) (0x4000648640) Stream added, broadcasting: 1\nI0819 14:00:48.124163 396 log.go:181] (0x4000e15290) Reply frame received for 1\nI0819 14:00:48.124888 396 log.go:181] (0x4000e15290) (0x4000648000) Create 
stream\nI0819 14:00:48.124956 396 log.go:181] (0x4000e15290) (0x4000648000) Stream added, broadcasting: 3\nI0819 14:00:48.125992 396 log.go:181] (0x4000e15290) Reply frame received for 3\nI0819 14:00:48.126223 396 log.go:181] (0x4000e15290) (0x40006480a0) Create stream\nI0819 14:00:48.126292 396 log.go:181] (0x4000e15290) (0x40006480a0) Stream added, broadcasting: 5\nI0819 14:00:48.127285 396 log.go:181] (0x4000e15290) Reply frame received for 5\nI0819 14:00:48.205081 396 log.go:181] (0x4000e15290) Data frame received for 5\nI0819 14:00:48.205552 396 log.go:181] (0x40006480a0) (5) Data frame handling\nI0819 14:00:48.205687 396 log.go:181] (0x4000e15290) Data frame received for 3\nI0819 14:00:48.205804 396 log.go:181] (0x4000648000) (3) Data frame handling\nI0819 14:00:48.205951 396 log.go:181] (0x4000e15290) Data frame received for 1\nI0819 14:00:48.206084 396 log.go:181] (0x4000648640) (1) Data frame handling\nI0819 14:00:48.207293 396 log.go:181] (0x4000648640) (1) Data frame sent\nI0819 14:00:48.207418 396 log.go:181] (0x40006480a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0819 14:00:48.208026 396 log.go:181] (0x4000e15290) Data frame received for 5\nI0819 14:00:48.208101 396 log.go:181] (0x40006480a0) (5) Data frame handling\nI0819 14:00:48.210087 396 log.go:181] (0x4000e15290) (0x4000648640) Stream removed, broadcasting: 1\nI0819 14:00:48.210357 396 log.go:181] (0x40006480a0) (5) Data frame sent\nI0819 14:00:48.210455 396 log.go:181] (0x4000e15290) Data frame received for 5\nI0819 14:00:48.210516 396 log.go:181] (0x40006480a0) (5) Data frame handling\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0819 14:00:48.212919 396 log.go:181] (0x4000e15290) Go away received\nI0819 14:00:48.216199 396 log.go:181] (0x4000e15290) (0x4000648640) Stream removed, broadcasting: 1\nI0819 14:00:48.216578 396 log.go:181] (0x4000e15290) (0x4000648000) Stream removed, broadcasting: 3\nI0819 14:00:48.216982 396 log.go:181] (0x4000e15290) (0x40006480a0) Stream removed, broadcasting: 5\n" Aug 19 14:00:48.228: INFO: stdout: "" Aug 19 14:00:48.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityrzd6l -- /bin/sh -x -c nc -zv -t -w 2 10.104.126.192 80' Aug 19 14:00:49.882: INFO: stderr: "I0819 14:00:49.743106 416 log.go:181] (0x4000564790) (0x40001fc320) Create stream\nI0819 14:00:49.747497 416 log.go:181] (0x4000564790) (0x40001fc320) Stream added, broadcasting: 1\nI0819 14:00:49.790420 416 log.go:181] (0x4000564790) Reply frame received for 1\nI0819 14:00:49.792705 416 log.go:181] (0x4000564790) (0x400053c000) Create stream\nI0819 14:00:49.792958 416 log.go:181] (0x4000564790) (0x400053c000) Stream added, broadcasting: 3\nI0819 14:00:49.796795 416 log.go:181] (0x4000564790) Reply frame received for 3\nI0819 14:00:49.797237 416 log.go:181] (0x4000564790) (0x4000c6e000) Create stream\nI0819 14:00:49.797322 416 log.go:181] (0x4000564790) (0x4000c6e000) Stream added, broadcasting: 5\nI0819 14:00:49.798513 416 log.go:181] (0x4000564790) Reply frame received for 5\nI0819 14:00:49.857885 416 log.go:181] (0x4000564790) Data frame received for 3\nI0819 14:00:49.858569 416 log.go:181] (0x4000564790) Data frame received for 5\nI0819 14:00:49.858707 416 log.go:181] (0x4000c6e000) (5) Data frame handling\nI0819 14:00:49.858835 416 log.go:181] (0x400053c000) (3) Data frame handling\nI0819 14:00:49.859108 416 log.go:181] (0x4000564790) Data frame received for 
1\nI0819 14:00:49.859241 416 log.go:181] (0x40001fc320) (1) Data frame handling\nI0819 14:00:49.860610 416 log.go:181] (0x4000c6e000) (5) Data frame sent\nI0819 14:00:49.861253 416 log.go:181] (0x40001fc320) (1) Data frame sent\nI0819 14:00:49.861462 416 log.go:181] (0x4000564790) Data frame received for 5\nI0819 14:00:49.861593 416 log.go:181] (0x4000c6e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.126.192 80\nConnection to 10.104.126.192 80 port [tcp/http] succeeded!\nI0819 14:00:49.862948 416 log.go:181] (0x4000564790) (0x40001fc320) Stream removed, broadcasting: 1\nI0819 14:00:49.866031 416 log.go:181] (0x4000564790) Go away received\nI0819 14:00:49.870196 416 log.go:181] (0x4000564790) (0x40001fc320) Stream removed, broadcasting: 1\nI0819 14:00:49.870563 416 log.go:181] (0x4000564790) (0x400053c000) Stream removed, broadcasting: 3\nI0819 14:00:49.870862 416 log.go:181] (0x4000564790) (0x4000c6e000) Stream removed, broadcasting: 5\n" Aug 19 14:00:49.883: INFO: stdout: "" Aug 19 14:00:49.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityrzd6l -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.126.192:80/ ; done' Aug 19 14:00:51.551: INFO: stderr: "I0819 14:00:51.355493 436 log.go:181] (0x4000c76000) (0x4000893d60) Create stream\nI0819 14:00:51.359788 436 log.go:181] (0x4000c76000) (0x4000893d60) Stream added, broadcasting: 1\nI0819 14:00:51.374432 436 log.go:181] (0x4000c76000) Reply frame received for 1\nI0819 14:00:51.375204 436 log.go:181] (0x4000c76000) (0x4000c461e0) Create stream\nI0819 14:00:51.375286 436 log.go:181] (0x4000c76000) (0x4000c461e0) Stream added, broadcasting: 3\nI0819 14:00:51.376790 436 log.go:181] (0x4000c76000) Reply frame received for 3\nI0819 14:00:51.376987 436 log.go:181] (0x4000c76000) (0x40004cb860) Create stream\nI0819 14:00:51.377039 436 log.go:181] (0x4000c76000) (0x40004cb860) Stream added, broadcasting: 5\nI0819 14:00:51.378044 436 log.go:181] (0x4000c76000) Reply frame received for 5\nI0819 14:00:51.447823 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.448238 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.448405 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.448596 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.449526 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.450933 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.451109 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.451225 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.451345 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.451475 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.451583 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.451721 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.451852 436 log.go:181] (0x40004cb860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.451966 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.452095 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.456140 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.456287 436 log.go:181] 
(0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.456437 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.457888 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.458012 436 log.go:181] (0x40004cb860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.458162 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.458330 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.458473 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.458623 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.463596 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.463693 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.463799 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.464159 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.464242 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.464311 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.464385 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.464442 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.464524 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.467369 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.467520 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.467668 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.467788 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.467899 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.468040 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.468166 436 log.go:181] (0x40004cb860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.468272 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.468389 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.470579 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.470687 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.470797 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.470910 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.471009 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.471119 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.471209 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.471307 436 log.go:181] (0x40004cb860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.471410 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.476049 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.476156 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.476272 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.476702 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.476936 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.477039 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.477155 436 log.go:181] (0x40004cb860) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.104.126.192:80/\nI0819 14:00:51.477238 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.477321 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.481167 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.481268 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.481382 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.481708 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.481785 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.481847 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.481905 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.481957 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.482021 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.482079 436 log.go:181] (0x4000c76000) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/I0819 14:00:51.482131 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.482398 436 log.go:181] (0x40004cb860) (5) Data frame sent\n\nI0819 14:00:51.486600 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.486740 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.486861 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.487409 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.487498 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.487572 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.487640 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.487721 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.487804 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.491236 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.491351 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.491502 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.495831 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.496013 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.496189 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.496333 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.496453 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.496558 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.501299 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.501438 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.501608 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.501768 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.501938 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.502088 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.509240 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.509606 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.509745 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.510164 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 
14:00:51.510259 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.510357 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.510915 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.510995 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.511073 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.517617 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.517737 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.517828 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.518412 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.518497 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.518573 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.518668 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.518739 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.518844 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.525831 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.525952 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.526049 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.526785 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.526918 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.527062 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.527148 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.527252 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.527336 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.530974 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.531067 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.531149 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.531222 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.531280 436 log.go:181] (0x40004cb860) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.531348 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.531409 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.531455 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.531524 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.534628 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.534760 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.534851 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.534927 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.534991 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.535079 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.535156 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.535211 436 log.go:181] (0x40004cb860) (5) Data frame sent\nI0819 14:00:51.535262 436 log.go:181] (0x4000c461e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:51.537746 436 log.go:181] (0x4000c76000) 
Data frame received for 3\nI0819 14:00:51.537808 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.537867 436 log.go:181] (0x4000c461e0) (3) Data frame sent\nI0819 14:00:51.538246 436 log.go:181] (0x4000c76000) Data frame received for 5\nI0819 14:00:51.538314 436 log.go:181] (0x40004cb860) (5) Data frame handling\nI0819 14:00:51.538411 436 log.go:181] (0x4000c76000) Data frame received for 3\nI0819 14:00:51.538485 436 log.go:181] (0x4000c461e0) (3) Data frame handling\nI0819 14:00:51.541561 436 log.go:181] (0x4000c76000) Data frame received for 1\nI0819 14:00:51.541661 436 log.go:181] (0x4000893d60) (1) Data frame handling\nI0819 14:00:51.541757 436 log.go:181] (0x4000893d60) (1) Data frame sent\nI0819 14:00:51.542783 436 log.go:181] (0x4000c76000) (0x4000893d60) Stream removed, broadcasting: 1\nI0819 14:00:51.543772 436 log.go:181] (0x4000c76000) Go away received\nI0819 14:00:51.545908 436 log.go:181] (0x4000c76000) (0x4000893d60) Stream removed, broadcasting: 1\nI0819 14:00:51.546138 436 log.go:181] (0x4000c76000) (0x4000c461e0) Stream removed, broadcasting: 3\nI0819 14:00:51.546300 436 log.go:181] (0x4000c76000) (0x40004cb860) Stream removed, broadcasting: 5\n" Aug 19 14:00:51.555: INFO: stdout: "\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm\naffinity-clusterip-timeout-rm2cm" Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.556: INFO: Received response from host: affinity-clusterip-timeout-rm2cm Aug 19 14:00:51.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityrzd6l -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.126.192:80/' Aug 19 14:00:53.165: INFO: stderr: "I0819 14:00:53.050145 456 log.go:181] (0x40005f4f20) 
(0x40006543c0) Create stream\nI0819 14:00:53.054753 456 log.go:181] (0x40005f4f20) (0x40006543c0) Stream added, broadcasting: 1\nI0819 14:00:53.067192 456 log.go:181] (0x40005f4f20) Reply frame received for 1\nI0819 14:00:53.068133 456 log.go:181] (0x40005f4f20) (0x4000b781e0) Create stream\nI0819 14:00:53.068250 456 log.go:181] (0x40005f4f20) (0x4000b781e0) Stream added, broadcasting: 3\nI0819 14:00:53.070325 456 log.go:181] (0x40005f4f20) Reply frame received for 3\nI0819 14:00:53.070524 456 log.go:181] (0x40005f4f20) (0x40005ec0a0) Create stream\nI0819 14:00:53.070585 456 log.go:181] (0x40005f4f20) (0x40005ec0a0) Stream added, broadcasting: 5\nI0819 14:00:53.071983 456 log.go:181] (0x40005f4f20) Reply frame received for 5\nI0819 14:00:53.141483 456 log.go:181] (0x40005f4f20) Data frame received for 5\nI0819 14:00:53.141688 456 log.go:181] (0x40005ec0a0) (5) Data frame handling\nI0819 14:00:53.142153 456 log.go:181] (0x40005ec0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:00:53.144435 456 log.go:181] (0x40005f4f20) Data frame received for 3\nI0819 14:00:53.144618 456 log.go:181] (0x4000b781e0) (3) Data frame handling\nI0819 14:00:53.144810 456 log.go:181] (0x40005f4f20) Data frame received for 5\nI0819 14:00:53.144909 456 log.go:181] (0x40005ec0a0) (5) Data frame handling\nI0819 14:00:53.145067 456 log.go:181] (0x4000b781e0) (3) Data frame sent\nI0819 14:00:53.145207 456 log.go:181] (0x40005f4f20) Data frame received for 3\nI0819 14:00:53.145344 456 log.go:181] (0x4000b781e0) (3) Data frame handling\nI0819 14:00:53.146691 456 log.go:181] (0x40005f4f20) Data frame received for 1\nI0819 14:00:53.146778 456 log.go:181] (0x40006543c0) (1) Data frame handling\nI0819 14:00:53.146873 456 log.go:181] (0x40006543c0) (1) Data frame sent\nI0819 14:00:53.147860 456 log.go:181] (0x40005f4f20) (0x40006543c0) Stream removed, broadcasting: 1\nI0819 14:00:53.150189 456 log.go:181] (0x40005f4f20) Go away received\nI0819 14:00:53.153854 456 log.go:181] (0x40005f4f20) (0x40006543c0) Stream removed, broadcasting: 1\nI0819 14:00:53.154517 456 log.go:181] (0x40005f4f20) (0x4000b781e0) Stream removed, broadcasting: 3\nI0819 14:00:53.154789 456 log.go:181] (0x40005f4f20) (0x40005ec0a0) Stream removed, broadcasting: 5\n" Aug 19 14:00:53.166: INFO: stdout: "affinity-clusterip-timeout-rm2cm" Aug 19 14:01:08.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityrzd6l -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.126.192:80/' Aug 19 14:01:09.834: INFO: stderr: "I0819 14:01:09.722951 476 log.go:181] (0x4000380e70) (0x400059e3c0) Create stream\nI0819 14:01:09.725372 476 log.go:181] (0x4000380e70) (0x400059e3c0) Stream added, broadcasting: 1\nI0819 14:01:09.734664 476 log.go:181] (0x4000380e70) Reply frame received for 1\nI0819 14:01:09.735462 476 log.go:181] (0x4000380e70) (0x4000aac000) Create stream\nI0819 14:01:09.735542 476 log.go:181] (0x4000380e70) (0x4000aac000) Stream added, broadcasting: 3\nI0819 14:01:09.736989 476 log.go:181] (0x4000380e70) Reply frame received for 3\nI0819 14:01:09.737281 476 log.go:181] (0x4000380e70) (0x40001ca000) Create stream\nI0819 14:01:09.737348 476 log.go:181] (0x4000380e70) (0x40001ca000) Stream added, broadcasting: 5\nI0819 14:01:09.738792 476 log.go:181] (0x4000380e70) Reply frame received for 5\nI0819 14:01:09.810448 476 log.go:181] (0x4000380e70) Data frame received for 5\nI0819 14:01:09.810825 476 log.go:181] 
(0x40001ca000) (5) Data frame handling\nI0819 14:01:09.811541 476 log.go:181] (0x4000380e70) Data frame received for 3\nI0819 14:01:09.811723 476 log.go:181] (0x4000aac000) (3) Data frame handling\n+ curl -q -s --connect-timeout 2 http://10.104.126.192:80/\nI0819 14:01:09.811896 476 log.go:181] (0x4000aac000) (3) Data frame sent\nI0819 14:01:09.812192 476 log.go:181] (0x40001ca000) (5) Data frame sent\nI0819 14:01:09.812476 476 log.go:181] (0x4000380e70) Data frame received for 3\nI0819 14:01:09.812606 476 log.go:181] (0x4000aac000) (3) Data frame handling\nI0819 14:01:09.812813 476 log.go:181] (0x4000380e70) Data frame received for 5\nI0819 14:01:09.812880 476 log.go:181] (0x40001ca000) (5) Data frame handling\nI0819 14:01:09.814222 476 log.go:181] (0x4000380e70) Data frame received for 1\nI0819 14:01:09.814292 476 log.go:181] (0x400059e3c0) (1) Data frame handling\nI0819 14:01:09.814365 476 log.go:181] (0x400059e3c0) (1) Data frame sent\nI0819 14:01:09.815061 476 log.go:181] (0x4000380e70) (0x400059e3c0) Stream removed, broadcasting: 1\nI0819 14:01:09.817838 476 log.go:181] (0x4000380e70) Go away received\nI0819 14:01:09.820581 476 log.go:181] (0x4000380e70) (0x400059e3c0) Stream removed, broadcasting: 1\nI0819 14:01:09.821137 476 log.go:181] (0x4000380e70) (0x4000aac000) Stream removed, broadcasting: 3\nI0819 14:01:09.821444 476 log.go:181] (0x4000380e70) (0x40001ca000) Stream removed, broadcasting: 5\n" Aug 19 14:01:09.835: INFO: stdout: "affinity-clusterip-timeout-24q8m" Aug 19 14:01:09.835: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1987, will wait for the garbage collector to delete the pods Aug 19 14:01:10.281: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 8.72519ms Aug 19 14:01:10.882: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.505367ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:01:18.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1987" for this suite. 
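For context: the run of identical hostnames above, followed later by a different backend (affinity-clusterip-timeout-24q8m), is the point of this test — a ClusterIP Service with ClientIP session affinity pins a client to one endpoint until the affinity timeout lapses. A minimal client-go sketch of such a Service; the name, namespace, pod label, and 10-second timeout are illustrative assumptions, not the test's actual fixture:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	timeout := int32(10) // hypothetical affinity timeout, in seconds
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"app": "affinity"}, // assumed pod label
			Ports:           []corev1.ServicePort{{Port: 80}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				// Requests from one client IP stick to one endpoint until the
				// timeout elapses; afterwards they may land elsewhere, as seen
				// in the log when the responding backend changes.
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}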
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:68.494 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":33,"skipped":334,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:01:18.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0819 14:01:28.460211 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 19 14:02:31.343: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:02:31.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6165" for this suite. 
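For context: "should delete RS created by deployment when not orphaning" relies on the ReplicaSet carrying an ownerReference to its Deployment, so a non-orphaning delete lets the garbage collector cascade to the RS and its pods — which is why the log briefly still reports 1 rs and 2 pods before they disappear. A sketch of issuing such a delete with client-go; the deployment name and namespace are assumptions:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation removes the Deployment at once and leaves the
	// garbage collector to delete the owned ReplicaSet and its pods, so a
	// short window where the RS and pods still exist is expected.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().Deployments("default").Delete(
		context.TODO(), "simpletest-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}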
• [SLOW TEST:72.927 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":34,"skipped":350,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:02:31.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2311 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2311 to expose endpoints map[] Aug 19 14:02:33.656: INFO: successfully validated that service endpoint-test2 in namespace services-2311 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2311 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2311 to expose endpoints map[pod1:[80]] Aug 19 14:02:37.828: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]], will retry Aug 19 14:02:38.839: INFO: successfully validated that service endpoint-test2 in namespace services-2311 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2311 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2311 to expose endpoints map[pod1:[80] pod2:[80]] Aug 19 14:02:42.943: INFO: successfully validated that service endpoint-test2 in namespace services-2311 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2311 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2311 to expose endpoints map[pod2:[80]] Aug 19 14:02:43.012: INFO: successfully validated that service endpoint-test2 in namespace services-2311 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2311 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2311 to expose endpoints map[] Aug 19 14:02:43.086: INFO: successfully validated that service endpoint-test2 in namespace services-2311 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:02:43.653: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2311" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.517 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":35,"skipped":359,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:02:43.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-897d52dc-311c-42a7-95d6-108cf168477f STEP: Creating a pod to test consume configMaps Aug 19 14:02:44.150: INFO: Waiting up to 5m0s for pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f" in namespace "configmap-4677" to be "Succeeded or Failed" Aug 19 14:02:44.154: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328727ms Aug 19 14:02:46.210: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060298854s Aug 19 14:02:48.799: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648362997s Aug 19 14:02:50.937: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.786429977s Aug 19 14:02:53.296: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.145625412s Aug 19 14:02:55.465: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.314592101s STEP: Saw pod success Aug 19 14:02:55.465: INFO: Pod "pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f" satisfied condition "Succeeded or Failed" Aug 19 14:02:55.470: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f container configmap-volume-test: STEP: delete the pod Aug 19 14:02:56.309: INFO: Waiting for pod pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f to disappear Aug 19 14:02:56.756: INFO: Pod pod-configmaps-2628952c-fe99-4d2c-9362-00242696849f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:02:56.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4677" for this suite. • [SLOW TEST:13.494 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":363,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:02:57.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-aa187be1-6a2d-4cab-881e-bb9387838fa3 STEP: Creating secret with name secret-projected-all-test-volume-b2eac63e-a701-420b-8544-3cfc45ff6516 STEP: Creating a pod to test Check all projections for projected volume plugin Aug 19 14:02:58.541: INFO: Waiting up to 5m0s for pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3" in namespace "projected-1079" to be "Succeeded or Failed" Aug 19 14:02:59.218: INFO: Pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3": Phase="Pending", Reason="", readiness=false. Elapsed: 676.602591ms Aug 19 14:03:01.226: INFO: Pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.684825211s Aug 19 14:03:04.068: INFO: Pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.525902585s Aug 19 14:03:06.250: INFO: Pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.708365538s Aug 19 14:03:08.280: INFO: Pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.738337913s STEP: Saw pod success Aug 19 14:03:08.280: INFO: Pod "projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3" satisfied condition "Succeeded or Failed" Aug 19 14:03:08.286: INFO: Trying to get logs from node latest-worker2 pod projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3 container projected-all-volume-test: STEP: delete the pod Aug 19 14:03:09.034: INFO: Waiting for pod projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3 to disappear Aug 19 14:03:09.414: INFO: Pod projected-volume-94a3bf25-7e1d-4fa2-a90d-c55394f435d3 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:03:09.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1079" for this suite. • [SLOW TEST:12.194 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":37,"skipped":369,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:03:09.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 19 14:03:09.650: INFO: Waiting up to 5m0s for pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7" in namespace "emptydir-2035" to be "Succeeded or Failed" Aug 19 14:03:09.762: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 112.031273ms Aug 19 14:03:11.897: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246885659s Aug 19 14:03:13.903: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.252776353s Aug 19 14:03:15.990: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339606573s Aug 19 14:03:18.624: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7": Phase="Running", Reason="", readiness=true. Elapsed: 8.973517132s Aug 19 14:03:20.789: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.138340071s STEP: Saw pod success Aug 19 14:03:20.789: INFO: Pod "pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7" satisfied condition "Succeeded or Failed" Aug 19 14:03:20.848: INFO: Trying to get logs from node latest-worker pod pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7 container test-container: STEP: delete the pod Aug 19 14:03:21.152: INFO: Waiting for pod pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7 to disappear Aug 19 14:03:21.159: INFO: Pod pod-7b66bea8-21d5-40ff-aff0-e306cc0adbd7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:03:21.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2035" for this suite. • [SLOW TEST:11.603 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":370,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:03:21.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 19 14:03:21.332: INFO: Waiting up to 5m0s for pod "pod-925a4486-2641-4118-af06-ec32609bafe4" in namespace "emptydir-557" to be "Succeeded or Failed" Aug 19 14:03:21.353: INFO: Pod "pod-925a4486-2641-4118-af06-ec32609bafe4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.717678ms Aug 19 14:03:23.357: INFO: Pod "pod-925a4486-2641-4118-af06-ec32609bafe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024834039s Aug 19 14:03:25.626: INFO: Pod "pod-925a4486-2641-4118-af06-ec32609bafe4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.293768316s Aug 19 14:03:27.717: INFO: Pod "pod-925a4486-2641-4118-af06-ec32609bafe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384220962s Aug 19 14:03:30.379: INFO: Pod "pod-925a4486-2641-4118-af06-ec32609bafe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.046541837s STEP: Saw pod success Aug 19 14:03:30.379: INFO: Pod "pod-925a4486-2641-4118-af06-ec32609bafe4" satisfied condition "Succeeded or Failed" Aug 19 14:03:30.386: INFO: Trying to get logs from node latest-worker2 pod pod-925a4486-2641-4118-af06-ec32609bafe4 container test-container: STEP: delete the pod Aug 19 14:03:31.290: INFO: Waiting for pod pod-925a4486-2641-4118-af06-ec32609bafe4 to disappear Aug 19 14:03:31.331: INFO: Pod pod-925a4486-2641-4118-af06-ec32609bafe4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:03:31.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-557" for this suite. • [SLOW TEST:10.658 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:03:31.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:03:42.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6834" for this suite. 
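For context: the hostAliases test works because the kubelet renders pod.spec.hostAliases into the container's /etc/hosts. A sketch of a pod carrying such an alias; the IP, hostname, and image are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasPod builds a busybox pod whose /etc/hosts gains an extra entry
// written by the kubelet; the test then reads the file back from the container.
func hostAliasPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local"}}, // assumed values
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/hosts"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = hostAliasPod() }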
• [SLOW TEST:10.884 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:03:42.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 19 14:03:50.485: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:03:50.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3011" for this suite. 
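For context: with terminationMessagePolicy set to FallbackToLogsOnError, the kubelet copies the tail of the container log into the termination message when the container exits non-zero without writing /dev/termination-log — hence the expected "DONE" above. A sketch of a container configured this way; the pod name, image, and command are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingPod prints to stdout and exits non-zero; because nothing is written
// to /dev/termination-log and the policy is FallbackToLogsOnError, the kubelet
// records the log tail ("DONE") as the container's termination message.
func failingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-from-logs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox:1.29",
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() { _ = failingPod() }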
• [SLOW TEST:8.311 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":433,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:03:51.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 19 14:04:00.245: INFO: Successfully updated pod "annotationupdate918ccf33-48e7-4958-9de4-fd92b372d702" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:04:02.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2969" for this suite. 
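For context: the annotation-update test passes because a downward API volume projects metadata.annotations into a file that the kubelet rewrites after the pod's annotations change. A sketch of such a pod; the names, mount path, and initial annotation are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationsPod mounts a downward API volume; after the test patches the
// pod's annotations, the kubelet rewrites /etc/podinfo/annotations in place.
func annotationsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"build": "one"}, // assumed initial value
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = annotationsPod() }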
• [SLOW TEST:11.403 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":438,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:04:02.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:04:15.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7229" for this suite. 
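For context: for a command that always fails, the test asserts on the Terminated state recorded in the pod's container statuses. A sketch of reading that state back with client-go; the pod name and namespace are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("default").Get(
		context.TODO(), "bin-false-pod", metav1.GetOptions{}) // hypothetical pod
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		// A container whose command always fails ends up with a Terminated
		// state whose Reason (e.g. "Error") and ExitCode the test asserts on.
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s: reason=%s exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
		}
	}
}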
• [SLOW TEST:13.989 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":43,"skipped":445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:04:16.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 14:04:18.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9" in namespace "projected-8107" to be "Succeeded or Failed" Aug 19 14:04:18.373: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Pending", Reason="", readiness=false. Elapsed: 370.719748ms Aug 19 14:04:20.394: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391899325s Aug 19 14:04:22.415: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412355172s Aug 19 14:04:24.783: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780715591s Aug 19 14:04:27.029: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.026555423s Aug 19 14:04:29.109: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Running", Reason="", readiness=true. Elapsed: 11.106558041s Aug 19 14:04:31.274: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.271560642s STEP: Saw pod success Aug 19 14:04:31.274: INFO: Pod "downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9" satisfied condition "Succeeded or Failed" Aug 19 14:04:31.579: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9 container client-container: STEP: delete the pod Aug 19 14:04:31.848: INFO: Waiting for pod downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9 to disappear Aug 19 14:04:32.733: INFO: Pod downwardapi-volume-d1948520-9608-4574-ad10-f502610934c9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:04:32.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8107" for this suite. • [SLOW TEST:16.520 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:04:32.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:04:40.623: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:04:43.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442680, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442680, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442681, 
loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442679, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:04:45.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442680, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442680, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442681, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442679, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:04:47.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442680, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442680, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442681, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733442679, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:04:50.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:04:51.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1381" for this suite. STEP: Destroying namespace "webhook-1381-markers" for this suite. 
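For context: listing and collection-deleting the test's mutating webhooks goes through the admissionregistration.k8s.io/v1 API; once the collection is gone, newly created ConfigMaps come back unmutated, as the final step verifies. A client-go sketch; the label selector is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	api := cs.AdmissionregistrationV1().MutatingWebhookConfigurations()

	sel := "e2e-list-test-webhooks=true" // assumed label on the test's webhooks
	list, err := api.List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d mutating webhook configurations\n", len(list.Items))

	// Deleting the whole collection is what makes the second ConfigMap in the
	// test come back unmutated.
	if err := api.DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
		panic(err)
	}
}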
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.546 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":45,"skipped":492,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:04:51.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:04:57.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3942" for this suite. 
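For context: the concurrent-watch test opens several watches at historical resource versions and verifies they replay events in identical order. A sketch of opening one watch from a fixed resourceVersion; the namespace and RV value are assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watches opened at the same historical resourceVersion must replay the
	// stored events in identical order; that property is what the test checks
	// across many concurrent watchers.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
		metav1.ListOptions{ResourceVersion: "1500000"}) // hypothetical RV
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("type=%s object=%T\n", ev.Type, ev.Object)
	}
}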
• [SLOW TEST:5.631 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":46,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:04:57.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 19 14:04:57.249: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504747 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:04:57.251: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504747 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 19 14:05:07.266: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504799 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:05:07.267: INFO: Got 
: MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504799 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 19 14:05:17.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504848 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:05:17.299: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504848 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 19 14:05:27.309: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504892 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:05:27.309: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-a 20be4e93-34a7-40ac-b116-a468b2ddb2ad 1504892 0 2020-08-19 14:04:57 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:17 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 19 14:05:37.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-b c3ea0816-71e7-4eb9-9613-fb0a7df75c35 1504940 0 2020-08-19 14:05:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:05:37.319: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-b c3ea0816-71e7-4eb9-9613-fb0a7df75c35 1504940 0 2020-08-19 14:05:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 19 14:05:47.718: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-b c3ea0816-71e7-4eb9-9613-fb0a7df75c35 1504978 0 2020-08-19 14:05:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:05:47.719: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2366 /api/v1/namespaces/watch-2366/configmaps/e2e-watch-test-configmap-b c3ea0816-71e7-4eb9-9613-fb0a7df75c35 1504978 0 2020-08-19 14:05:37 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-19 14:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:05:57.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2366" for this suite. 
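For context: the A/B notifications above come from watches filtered by label selector, so the watcher for label A never observes e2e-watch-test-configmap-b. A sketch of such a filtered watch; the namespace and selector are taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Only configmaps labeled for watcher A reach this channel; ADDED,
	// MODIFIED, and DELETED events for configmap B are filtered server-side.
	w, err := cs.CoreV1().ConfigMaps("watch-2366").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("got:", ev.Type)
	}
}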
• [SLOW TEST:60.750 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":47,"skipped":536,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:05:57.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 14:05:58.370: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4" in namespace "projected-7504" to be "Succeeded or Failed" Aug 19 14:05:58.411: INFO: Pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.094017ms Aug 19 14:06:00.416: INFO: Pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045504509s Aug 19 14:06:02.952: INFO: Pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581077706s Aug 19 14:06:05.292: INFO: Pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.922021729s Aug 19 14:06:07.300: INFO: Pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.929062981s STEP: Saw pod success Aug 19 14:06:07.300: INFO: Pod "downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4" satisfied condition "Succeeded or Failed" Aug 19 14:06:07.305: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4 container client-container: STEP: delete the pod Aug 19 14:06:07.538: INFO: Waiting for pod downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4 to disappear Aug 19 14:06:07.620: INFO: Pod downwardapi-volume-3e5f1583-46e0-4696-8616-de01098165a4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:06:07.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7504" for this suite. • [SLOW TEST:9.750 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":547,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:06:07.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Aug 19 14:06:07.802: INFO: Waiting up to 5m0s for pod "pod-6ca984c0-a9f2-4089-927e-5ed0b7253421" in namespace "emptydir-1475" to be "Succeeded or Failed" Aug 19 14:06:07.817: INFO: Pod "pod-6ca984c0-a9f2-4089-927e-5ed0b7253421": Phase="Pending", Reason="", readiness=false. Elapsed: 15.181916ms Aug 19 14:06:10.862: INFO: Pod "pod-6ca984c0-a9f2-4089-927e-5ed0b7253421": Phase="Pending", Reason="", readiness=false. Elapsed: 3.060295358s Aug 19 14:06:13.143: INFO: Pod "pod-6ca984c0-a9f2-4089-927e-5ed0b7253421": Phase="Pending", Reason="", readiness=false. Elapsed: 5.341394783s Aug 19 14:06:15.149: INFO: Pod "pod-6ca984c0-a9f2-4089-927e-5ed0b7253421": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.347677759s STEP: Saw pod success Aug 19 14:06:15.150: INFO: Pod "pod-6ca984c0-a9f2-4089-927e-5ed0b7253421" satisfied condition "Succeeded or Failed" Aug 19 14:06:15.154: INFO: Trying to get logs from node latest-worker2 pod pod-6ca984c0-a9f2-4089-927e-5ed0b7253421 container test-container: STEP: delete the pod Aug 19 14:06:15.398: INFO: Waiting for pod pod-6ca984c0-a9f2-4089-927e-5ed0b7253421 to disappear Aug 19 14:06:15.463: INFO: Pod pod-6ca984c0-a9f2-4089-927e-5ed0b7253421 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:06:15.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1475" for this suite. • [SLOW TEST:7.848 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":49,"skipped":550,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:06:15.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:08:17.314: INFO: Deleting pod "var-expansion-9b3166ad-bd91-449f-bf2a-8e9efbc71a0f" in namespace "var-expansion-8857" Aug 19 14:08:17.511: INFO: Wait up to 5m0s for pod "var-expansion-9b3166ad-bd91-449f-bf2a-8e9efbc71a0f" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:08:21.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8857" for this suite. 
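[editor's annotation] The var-expansion spec above builds a pod whose volumeMount subPathExpr expands an environment variable, then asserts the pod fails because the expansion yields an absolute path. The log does not include the pod manifest, so this is only a sketch of the field wiring under that assumption; all names and the "/absolute-path" value are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// demoPod wires SubPathExpr to an env var. An expansion that produces an
// absolute path is invalid, so a pod like this is expected to fail, not run.
func demoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "true"},
				Env:     []corev1.EnvVar{{Name: "SUBPATH", Value: "/absolute-path"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "work",
					MountPath:   "/data",
					SubPathExpr: "$(SUBPATH)", // expands to an absolute path -> rejected
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(demoPod().Name)
}
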
• [SLOW TEST:126.135 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":50,"skipped":552,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:08:21.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-7c84da64-ddbd-4197-8517-e9d142040cdd in namespace container-probe-3370 Aug 19 14:08:30.451: INFO: Started pod liveness-7c84da64-ddbd-4197-8517-e9d142040cdd in namespace container-probe-3370 STEP: checking the pod's current state and verifying that restartCount is present Aug 19 14:08:30.456: INFO: Initial restart count of pod liveness-7c84da64-ddbd-4197-8517-e9d142040cdd is 0 Aug 19 14:08:57.697: INFO: Restart count of pod container-probe-3370/liveness-7c84da64-ddbd-4197-8517-e9d142040cdd is now 1 (27.241399748s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:08:57.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3370" for this suite. 
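[editor's annotation] In the probe spec above, restartCount going from 0 to 1 roughly 27s after start is consistent with an initial delay followed by a failing /healthz check: the kubelet GETs the endpoint and restarts the container after FailureThreshold consecutive failures. A sketch of a probe of that shape; the numeric values are illustrative, not the test's:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		Handler: corev1.Handler{ // renamed ProbeHandler in client-go v0.22+
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15, // illustrative values
		PeriodSeconds:       3,
		FailureThreshold:    1,
	}
	fmt.Printf("liveness probe: %+v\n", probe)
}
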
• [SLOW TEST:36.258 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":51,"skipped":566,"failed":0} S ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:08:57.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:09:00.468: INFO: Checking APIGroup: apiregistration.k8s.io Aug 19 14:09:00.471: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Aug 19 14:09:00.471: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.471: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Aug 19 14:09:00.471: INFO: Checking APIGroup: extensions Aug 19 14:09:00.473: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Aug 19 14:09:00.473: INFO: Versions found [{extensions/v1beta1 v1beta1}] Aug 19 14:09:00.473: INFO: extensions/v1beta1 matches extensions/v1beta1 Aug 19 14:09:00.473: INFO: Checking APIGroup: apps Aug 19 14:09:00.475: INFO: PreferredVersion.GroupVersion: apps/v1 Aug 19 14:09:00.475: INFO: Versions found [{apps/v1 v1}] Aug 19 14:09:00.475: INFO: apps/v1 matches apps/v1 Aug 19 14:09:00.475: INFO: Checking APIGroup: events.k8s.io Aug 19 14:09:00.477: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Aug 19 14:09:00.477: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.477: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Aug 19 14:09:00.477: INFO: Checking APIGroup: authentication.k8s.io Aug 19 14:09:00.479: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Aug 19 14:09:00.479: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.479: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Aug 19 14:09:00.479: INFO: Checking APIGroup: authorization.k8s.io Aug 19 14:09:00.481: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Aug 19 14:09:00.481: INFO: Versions found [{authorization.k8s.io/v1 v1} 
{authorization.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.481: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Aug 19 14:09:00.481: INFO: Checking APIGroup: autoscaling Aug 19 14:09:00.482: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Aug 19 14:09:00.482: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Aug 19 14:09:00.482: INFO: autoscaling/v1 matches autoscaling/v1 Aug 19 14:09:00.482: INFO: Checking APIGroup: batch Aug 19 14:09:00.484: INFO: PreferredVersion.GroupVersion: batch/v1 Aug 19 14:09:00.484: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Aug 19 14:09:00.484: INFO: batch/v1 matches batch/v1 Aug 19 14:09:00.484: INFO: Checking APIGroup: certificates.k8s.io Aug 19 14:09:00.485: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Aug 19 14:09:00.485: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.485: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Aug 19 14:09:00.486: INFO: Checking APIGroup: networking.k8s.io Aug 19 14:09:00.487: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Aug 19 14:09:00.487: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.487: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Aug 19 14:09:00.487: INFO: Checking APIGroup: policy Aug 19 14:09:00.488: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Aug 19 14:09:00.488: INFO: Versions found [{policy/v1beta1 v1beta1}] Aug 19 14:09:00.488: INFO: policy/v1beta1 matches policy/v1beta1 Aug 19 14:09:00.488: INFO: Checking APIGroup: rbac.authorization.k8s.io Aug 19 14:09:00.490: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Aug 19 14:09:00.490: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.490: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Aug 19 14:09:00.490: INFO: Checking APIGroup: storage.k8s.io Aug 19 14:09:00.492: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Aug 19 14:09:00.492: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.492: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Aug 19 14:09:00.492: INFO: Checking APIGroup: admissionregistration.k8s.io Aug 19 14:09:00.494: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Aug 19 14:09:00.494: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.494: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Aug 19 14:09:00.494: INFO: Checking APIGroup: apiextensions.k8s.io Aug 19 14:09:00.495: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Aug 19 14:09:00.495: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.495: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Aug 19 14:09:00.496: INFO: Checking APIGroup: scheduling.k8s.io Aug 19 14:09:00.497: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Aug 19 14:09:00.497: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.497: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Aug 19 14:09:00.497: INFO: Checking APIGroup: coordination.k8s.io Aug 19 14:09:00.499: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Aug 19 14:09:00.499: INFO: Versions found 
[{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.499: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Aug 19 14:09:00.499: INFO: Checking APIGroup: node.k8s.io Aug 19 14:09:00.501: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Aug 19 14:09:00.501: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.501: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Aug 19 14:09:00.501: INFO: Checking APIGroup: discovery.k8s.io Aug 19 14:09:00.503: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Aug 19 14:09:00.503: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Aug 19 14:09:00.503: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:09:00.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-4982" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":52,"skipped":567,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:09:00.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 
14:09:50.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6824" for this suite. • [SLOW TEST:50.444 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":568,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:09:50.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-ed54a9c2-6c0f-4fbd-9b67-a786a4e8264a STEP: Creating a pod to test consume secrets Aug 19 14:09:51.581: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121" in namespace "projected-9575" to be "Succeeded or Failed" Aug 19 14:09:51.754: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121": Phase="Pending", Reason="", readiness=false. Elapsed: 172.530395ms Aug 19 14:09:53.816: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235357504s Aug 19 14:09:55.936: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355341159s Aug 19 14:09:58.165: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583809131s Aug 19 14:10:00.405: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121": Phase="Running", Reason="", readiness=true. Elapsed: 8.823488942s Aug 19 14:10:02.413: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.831428204s STEP: Saw pod success Aug 19 14:10:02.413: INFO: Pod "pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121" satisfied condition "Succeeded or Failed" Aug 19 14:10:02.418: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121 container projected-secret-volume-test: STEP: delete the pod Aug 19 14:10:02.494: INFO: Waiting for pod pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121 to disappear Aug 19 14:10:02.499: INFO: Pod pod-projected-secrets-abf3e577-db3f-4cfc-b724-7cb1e6df5121 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:10:02.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9575" for this suite. • [SLOW TEST:11.551 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":584,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:10:02.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:10:13.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3637" for this suite. 
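[editor's annotation] The Docker Containers spec above checks the "image defaults" case of the pod API's mapping onto image metadata: Command unset means the image ENTRYPOINT runs, Args unset means the image CMD is used; setting either replaces the corresponding image value. A minimal sketch (the image is one used elsewhere in this run; the container name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "demo",
		Image: "docker.io/library/httpd:2.4.38-alpine",
		// Command and Args intentionally nil: the kubelet runs the image's
		// own ENTRYPOINT/CMD, which is the behavior this spec verifies.
	}
	fmt.Println(c.Image)
}
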
• [SLOW TEST:10.808 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":596,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:10:13.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 19 14:10:13.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9917' Aug 19 14:10:25.379: INFO: stderr: "" Aug 19 14:10:25.379: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 19 14:10:30.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9917 -o json' Aug 19 14:10:31.846: INFO: stderr: "" Aug 19 14:10:31.847: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-19T14:10:25Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n 
\"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-19T14:10:25Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.227\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-19T14:10:29Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9917\",\n \"resourceVersion\": \"1506273\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9917/pods/e2e-test-httpd-pod\",\n \"uid\": \"e37fa4b4-fe40-4da3-8e04-13ce3c7a7fe5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-px48f\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-px48f\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-px48f\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:10:25Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:10:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:10:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:10:25Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://2cfdcc73e9518c05e6c72b0414001b7ac1580511fd2d857c8c43b7dd9c61f184\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n 
\"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-19T14:10:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.11\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.227\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.227\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-19T14:10:25Z\"\n }\n}\n" STEP: replace the image in the pod Aug 19 14:10:31.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9917' Aug 19 14:10:36.609: INFO: stderr: "" Aug 19 14:10:36.609: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Aug 19 14:10:36.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9917' Aug 19 14:11:00.038: INFO: stderr: "" Aug 19 14:11:00.038: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:11:00.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9917" for this suite. • [SLOW TEST:46.907 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":56,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:11:00.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 19 14:11:00.381: INFO: Waiting up to 5m0s for pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98" in namespace "downward-api-5276" to be 
"Succeeded or Failed" Aug 19 14:11:00.400: INFO: Pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98": Phase="Pending", Reason="", readiness=false. Elapsed: 18.608887ms Aug 19 14:11:02.637: INFO: Pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255911091s Aug 19 14:11:04.645: INFO: Pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263248191s Aug 19 14:11:06.806: INFO: Pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98": Phase="Running", Reason="", readiness=true. Elapsed: 6.424664542s Aug 19 14:11:08.815: INFO: Pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.433373439s STEP: Saw pod success Aug 19 14:11:08.815: INFO: Pod "downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98" satisfied condition "Succeeded or Failed" Aug 19 14:11:08.820: INFO: Trying to get logs from node latest-worker pod downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98 container dapi-container: STEP: delete the pod Aug 19 14:11:08.861: INFO: Waiting for pod downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98 to disappear Aug 19 14:11:08.872: INFO: Pod downward-api-938a67e8-c262-40e8-bc93-eb5599bd7f98 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:11:08.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5276" for this suite. • [SLOW TEST:8.656 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":57,"skipped":644,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:11:08.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4285.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4285.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4285.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4285.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4285.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4285.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 14:11:17.231: INFO: DNS probes using dns-4285/dns-test-98f7e9e0-8cde-400b-88e9-229f7640c80f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:11:17.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4285" for this suite. • [SLOW TEST:9.222 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":58,"skipped":650,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:11:18.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Aug 19 14:11:19.184: INFO: created test-pod-1 Aug 19 14:11:19.215: INFO: created test-pod-2 Aug 19 14:11:19.381: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted 
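[editor's annotation] The pod-collection spec above deletes all three test pods with a single call rather than one delete per pod; a DeleteCollection sketch under that reading, with an assumed illustrative label selector (the actual selector is not shown in the log):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One call removes every pod matching the selector; a spec like the one
	// above then polls the same list until it comes back empty.
	err = client.CoreV1().Pods("default").DeleteCollection(context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"}) // assumed selector
	if err != nil {
		panic(err)
	}
}
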
[AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:11:23.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4241" for this suite. • [SLOW TEST:5.005 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":59,"skipped":668,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:11:23.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 19 14:11:29.714: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 19 14:11:32.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443089, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443089, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443090, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443089, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:11:34.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443089, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443089, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443090, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443089, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:11:37.879: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:11:37.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:11:40.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1216" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:17.865 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":60,"skipped":687,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:11:40.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Aug 19 14:11:42.756: INFO: Waiting up to 5m0s for pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9" in namespace "containers-7870" to be "Succeeded or Failed" Aug 19 14:11:43.105: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 348.280108ms Aug 19 14:11:45.203: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.446930608s Aug 19 14:11:47.209: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452470193s Aug 19 14:11:49.215: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458233899s Aug 19 14:11:51.296: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9": Phase="Running", Reason="", readiness=true. Elapsed: 8.539853793s Aug 19 14:11:53.302: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.545272967s STEP: Saw pod success Aug 19 14:11:53.302: INFO: Pod "client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9" satisfied condition "Succeeded or Failed" Aug 19 14:11:53.305: INFO: Trying to get logs from node latest-worker2 pod client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9 container test-container: STEP: delete the pod Aug 19 14:11:53.513: INFO: Waiting for pod client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9 to disappear Aug 19 14:11:53.563: INFO: Pod client-containers-f0077296-f8da-4f13-ad7c-235689a12ee9 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:11:53.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7870" for this suite. 
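[editor's annotation] Complementing the image-defaults case annotated earlier: the spec above overrides only the image's CMD by setting Args, leaving the ENTRYPOINT in place. A sketch with illustrative values (the real test checks the container's output against the overridden arguments):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29",
		Args:  []string{"override", "arguments"}, // replaces CMD; ENTRYPOINT still applies
	}
	fmt.Println(c.Args)
}
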
• [SLOW TEST:12.588 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":689,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:11:53.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-5d5d08c2-8cb0-4c56-8697-d6c122cffdc3 STEP: Creating a pod to test consume secrets Aug 19 14:11:53.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44" in namespace "projected-7876" to be "Succeeded or Failed" Aug 19 14:11:54.080: INFO: Pod "pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44": Phase="Pending", Reason="", readiness=false. Elapsed: 238.46606ms Aug 19 14:11:56.086: INFO: Pod "pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243906473s Aug 19 14:11:58.092: INFO: Pod "pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249524976s Aug 19 14:12:00.134: INFO: Pod "pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.292388428s STEP: Saw pod success Aug 19 14:12:00.135: INFO: Pod "pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44" satisfied condition "Succeeded or Failed" Aug 19 14:12:00.138: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44 container projected-secret-volume-test: STEP: delete the pod Aug 19 14:12:00.574: INFO: Waiting for pod pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44 to disappear Aug 19 14:12:00.611: INFO: Pod pod-projected-secrets-48c4efca-7c47-4216-bf45-009c86e1bf44 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:12:00.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7876" for this suite. • [SLOW TEST:7.052 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":701,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:12:00.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:12:00.781: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:12:03.333: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:12:04.789: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:12:07.041: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:12:08.816: INFO: The status of Pod 
test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:10.789: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:12.897: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:14.788: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:16.824: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:18.859: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:20.789: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:22.806: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = false) Aug 19 14:12:24.787: INFO: The status of Pod test-webserver-8b1d34f1-4e16-40cd-bbbb-3a7b3ba5b99d is Running (Ready = true) Aug 19 14:12:24.793: INFO: Container started at 2020-08-19 14:12:07 +0000 UTC, pod became ready at 2020-08-19 14:12:22 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:12:24.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5408" for this suite. • [SLOW TEST:24.178 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:12:24.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod 
to test downward API volume plugin Aug 19 14:12:25.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a" in namespace "projected-7819" to be "Succeeded or Failed" Aug 19 14:12:25.104: INFO: Pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.067515ms Aug 19 14:12:27.110: INFO: Pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029374667s Aug 19 14:12:29.116: INFO: Pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035545701s Aug 19 14:12:31.121: INFO: Pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a": Phase="Running", Reason="", readiness=true. Elapsed: 6.0407353s Aug 19 14:12:33.241: INFO: Pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.160080284s STEP: Saw pod success Aug 19 14:12:33.241: INFO: Pod "downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a" satisfied condition "Succeeded or Failed" Aug 19 14:12:33.245: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a container client-container: STEP: delete the pod Aug 19 14:12:33.409: INFO: Waiting for pod downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a to disappear Aug 19 14:12:33.488: INFO: Pod downwardapi-volume-0033d93c-9d30-435e-961a-9d35a485318a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:12:33.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7819" for this suite. • [SLOW TEST:8.693 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":760,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:12:33.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:12:40.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3989" for this suite. • [SLOW TEST:7.498 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":65,"skipped":769,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:12:41.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3795 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3795 I0819 14:12:41.221327 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3795, replica count: 2 I0819 14:12:44.272974 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:12:47.273402 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:12:50.274391 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 14:12:50.275: INFO: Creating new exec pod Aug 19 14:12:57.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec 
--namespace=services-3795 execpodl2djn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 19 14:12:59.868: INFO: stderr: "I0819 14:12:59.741843 583 log.go:181] (0x4000f842c0) (0x4000840e60) Create stream\nI0819 14:12:59.746204 583 log.go:181] (0x4000f842c0) (0x4000840e60) Stream added, broadcasting: 1\nI0819 14:12:59.761446 583 log.go:181] (0x4000f842c0) Reply frame received for 1\nI0819 14:12:59.762432 583 log.go:181] (0x4000f842c0) (0x4000aaf220) Create stream\nI0819 14:12:59.762585 583 log.go:181] (0x4000f842c0) (0x4000aaf220) Stream added, broadcasting: 3\nI0819 14:12:59.764101 583 log.go:181] (0x4000f842c0) Reply frame received for 3\nI0819 14:12:59.764419 583 log.go:181] (0x4000f842c0) (0x4000444a00) Create stream\nI0819 14:12:59.764483 583 log.go:181] (0x4000f842c0) (0x4000444a00) Stream added, broadcasting: 5\nI0819 14:12:59.765935 583 log.go:181] (0x4000f842c0) Reply frame received for 5\nI0819 14:12:59.847029 583 log.go:181] (0x4000f842c0) Data frame received for 3\nI0819 14:12:59.847253 583 log.go:181] (0x4000aaf220) (3) Data frame handling\nI0819 14:12:59.847586 583 log.go:181] (0x4000f842c0) Data frame received for 1\nI0819 14:12:59.847679 583 log.go:181] (0x4000840e60) (1) Data frame handling\nI0819 14:12:59.847792 583 log.go:181] (0x4000f842c0) Data frame received for 5\nI0819 14:12:59.847894 583 log.go:181] (0x4000444a00) (5) Data frame handling\nI0819 14:12:59.849201 583 log.go:181] (0x4000444a00) (5) Data frame sent\nI0819 14:12:59.849355 583 log.go:181] (0x4000840e60) (1) Data frame sent\nI0819 14:12:59.849564 583 log.go:181] (0x4000f842c0) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0819 14:12:59.849623 583 log.go:181] (0x4000444a00) (5) Data frame handling\nI0819 14:12:59.850676 583 log.go:181] (0x4000f842c0) (0x4000840e60) Stream removed, broadcasting: 1\nI0819 14:12:59.852587 583 log.go:181] (0x4000f842c0) Go away received\nI0819 14:12:59.855509 583 log.go:181] (0x4000f842c0) (0x4000840e60) Stream removed, broadcasting: 1\nI0819 14:12:59.855807 583 log.go:181] (0x4000f842c0) (0x4000aaf220) Stream removed, broadcasting: 3\nI0819 14:12:59.855981 583 log.go:181] (0x4000f842c0) (0x4000444a00) Stream removed, broadcasting: 5\n" Aug 19 14:12:59.868: INFO: stdout: "" Aug 19 14:12:59.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3795 execpodl2djn -- /bin/sh -x -c nc -zv -t -w 2 10.110.127.25 80' Aug 19 14:13:01.442: INFO: stderr: "I0819 14:13:01.344096 604 log.go:181] (0x40001720b0) (0x4000be0000) Create stream\nI0819 14:13:01.348023 604 log.go:181] (0x40001720b0) (0x4000be0000) Stream added, broadcasting: 1\nI0819 14:13:01.361029 604 log.go:181] (0x40001720b0) Reply frame received for 1\nI0819 14:13:01.362283 604 log.go:181] (0x40001720b0) (0x4000a40000) Create stream\nI0819 14:13:01.362437 604 log.go:181] (0x40001720b0) (0x4000a40000) Stream added, broadcasting: 3\nI0819 14:13:01.364508 604 log.go:181] (0x40001720b0) Reply frame received for 3\nI0819 14:13:01.364928 604 log.go:181] (0x40001720b0) (0x4000be00a0) Create stream\nI0819 14:13:01.365029 604 log.go:181] (0x40001720b0) (0x4000be00a0) Stream added, broadcasting: 5\nI0819 14:13:01.367009 604 log.go:181] (0x40001720b0) Reply frame received for 5\nI0819 14:13:01.422553 604 log.go:181] (0x40001720b0) Data frame received for 5\nI0819 14:13:01.422906 604 log.go:181] (0x4000be00a0) (5) Data frame handling\nI0819 14:13:01.423066 604 
log.go:181] (0x40001720b0) Data frame received for 1\nI0819 14:13:01.423168 604 log.go:181] (0x4000be0000) (1) Data frame handling\nI0819 14:13:01.423268 604 log.go:181] (0x40001720b0) Data frame received for 3\nI0819 14:13:01.423374 604 log.go:181] (0x4000a40000) (3) Data frame handling\n+ nc -zv -t -w 2 10.110.127.25 80\nConnection to 10.110.127.25 80 port [tcp/http] succeeded!\nI0819 14:13:01.425180 604 log.go:181] (0x4000be00a0) (5) Data frame sent\nI0819 14:13:01.425341 604 log.go:181] (0x40001720b0) Data frame received for 5\nI0819 14:13:01.425408 604 log.go:181] (0x4000be00a0) (5) Data frame handling\nI0819 14:13:01.425565 604 log.go:181] (0x4000be0000) (1) Data frame sent\nI0819 14:13:01.426664 604 log.go:181] (0x40001720b0) (0x4000be0000) Stream removed, broadcasting: 1\nI0819 14:13:01.428449 604 log.go:181] (0x40001720b0) Go away received\nI0819 14:13:01.432155 604 log.go:181] (0x40001720b0) (0x4000be0000) Stream removed, broadcasting: 1\nI0819 14:13:01.432592 604 log.go:181] (0x40001720b0) (0x4000a40000) Stream removed, broadcasting: 3\nI0819 14:13:01.432941 604 log.go:181] (0x40001720b0) (0x4000be00a0) Stream removed, broadcasting: 5\n" Aug 19 14:13:01.443: INFO: stdout: "" Aug 19 14:13:01.444: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:13:01.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3795" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.509 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":66,"skipped":770,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:13:01.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch 
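The patch-and-relist flow in the steps above maps onto two client-go calls: a strategic-merge patch that adds a label to the secret, and a list across all namespaces filtered on that label. A minimal sketch, with the secret name, namespace, and label values as assumptions:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Patch a label onto an existing secret (name and namespace are examples).
	patch := []byte(`{"metadata":{"labels":{"testsecret":"patched"}}}`)
	if _, err := client.CoreV1().Secrets("default").Patch(
		ctx, "example-secret", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// List across all namespaces, selecting on the label the patch just set.
	secrets, err := client.CoreV1().Secrets(metav1.NamespaceAll).List(
		ctx, metav1.ListOptions{LabelSelector: "testsecret=patched"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d matching secrets\n", len(secrets.Items))
}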
[AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:13:01.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6801" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":67,"skipped":782,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:13:02.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:13:02.857: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:13:03.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4069" for this suite. 
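Writes to a CRD's status sub-resource go through dedicated client verbs rather than a plain update. A minimal sketch of that path using the apiextensions clientset; the CRD name and the condition contents are illustrative assumptions (the suite generates random names):

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(
		ctx, "examples.mygroup.example.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// UpdateStatus targets only the /status subresource; spec changes sent in
	// the same request body are ignored by the apiserver.
	crd.Status.Conditions = append(crd.Status.Conditions,
		apiextv1.CustomResourceDefinitionCondition{
			Type:    "StatusUpdated",
			Status:  apiextv1.ConditionTrue,
			Reason:  "E2E",
			Message: "set via the status subresource",
		})
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().UpdateStatus(
		ctx, crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// A JSON patch against the same endpoint would pass "status" as the
	// trailing subresources argument to Patch.
}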
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":68,"skipped":790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:13:03.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 19 14:13:03.742: INFO: PodSpec: initContainers in spec.initContainers Aug 19 14:14:04.542: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a4d5882f-6335-4257-a069-eed5506bfd39", GenerateName:"", Namespace:"init-container-2592", SelfLink:"/api/v1/namespaces/init-container-2592/pods/pod-init-a4d5882f-6335-4257-a069-eed5506bfd39", UID:"febd309b-15a9-4f3c-a3a6-b16dca9f931f", ResourceVersion:"1507340", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733443183, loc:(*time.Location)(0x6e4f160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"741835475"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4003ab65c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4003ab65e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4003ab6600), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4003ab6620)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5kngw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4002cc4780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), 
DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5kngw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5kngw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5kngw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40033e5de8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40021cb030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40033e5e70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40033e5e90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x40033e5e98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x40033e5e9c), PreemptionPolicy:(*v1.PreemptionPolicy)(0x400265b560), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443184, loc:(*time.Location)(0x6e4f160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443184, loc:(*time.Location)(0x6e4f160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443184, loc:(*time.Location)(0x6e4f160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443183, loc:(*time.Location)(0x6e4f160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.11", PodIP:"10.244.2.233", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.233"}}, StartTime:(*v1.Time)(0x4003ab6640), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40021cb110)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40021cb180)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://9be1b585f50113556a3da143743d828fdc04a0821a9914be2ac65708a215330c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003ab6680), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003ab6660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x40033e5f1f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:14:04.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2592" for this suite. • [SLOW TEST:61.452 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":69,"skipped":813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:14:05.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 19 14:14:05.279: INFO: >>> 
kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:16:02.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6256" for this suite. • [SLOW TEST:117.610 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":70,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:16:02.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:16:02.775: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 19 14:16:04.263: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:16:04.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5676" for this suite. 
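The quota-pressure scenario logged above can be reproduced with plain client-go calls: a ResourceQuota hard-capped at two pods, an RC asking for three, and a read of the RC's conditions looking for a ReplicaFailure entry. A minimal sketch, using an example namespace and the pause image (the real test polls until the condition appears; a single re-read stands in for that here):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default" // example namespace

	// A quota that admits at most two pods in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// An RC that asks for three replicas, one more than the quota admits.
	replicas := int32(3)
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pause",
					Image: "k8s.gcr.io/pause:3.2",
				}}},
			},
		},
	}
	if _, err := client.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	got, err := client.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range got.Status.Conditions {
		if cond.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Println("failure condition surfaced:", cond.Reason, cond.Message)
		}
	}
}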
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":71,"skipped":903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:16:04.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 19 14:16:05.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7773' Aug 19 14:16:06.637: INFO: stderr: "" Aug 19 14:16:06.637: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Aug 19 14:16:06.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-7773' Aug 19 14:16:08.073: INFO: stderr: "" Aug 19 14:16:08.074: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-19T14:16:06Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-19T14:16:06Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": 
{},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-19T14:16:07Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7773\",\n \"resourceVersion\": \"1507755\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7773/pods/e2e-test-httpd-pod\",\n \"uid\": \"dec307cf-684f-4b9d-a58c-49283e30c614\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6s5n8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6s5n8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6s5n8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:16:07Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:16:07Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:16:07Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-19T14:16:06Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.14\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-19T14:16:07Z\"\n }\n}\n" Aug 19 14:16:08.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-7773' Aug 19 14:16:10.934: INFO: stderr: "W0819 14:16:09.083197 665 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Aug 19 14:16:10.934: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" 
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Aug 19 14:16:11.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7773' Aug 19 14:16:20.797: INFO: stderr: "" Aug 19 14:16:20.797: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:16:20.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7773" for this suite. • [SLOW TEST:16.526 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":72,"skipped":928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:16:21.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 19 14:16:35.136: INFO: Successfully updated pod "labelsupdate97a2cbf2-c45c-447f-afb6-58fc0879df75" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:16:37.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4449" for this suite. 
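The labels-update spec above relies on a projected downward API volume: the pod's labels are materialized as a file that the kubelet rewrites when the labels change, so the test can update the labels and watch the mounted file follow. A minimal sketch of such a pod (the name, image, label, and mount path are assumptions, not the suite's fixture):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example",
			Labels: map[string]string{"stage": "initial"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29",
				// Keep printing the projected labels file; the kubelet
				// rewrites it after the pod's labels are updated.
				Command: []string{"sh", "-c",
					"while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "labels",
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.labels",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}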
• [SLOW TEST:16.650 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":73,"skipped":995,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:16:37.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-eef585f2-c761-49fe-9b8c-102d2290f457 STEP: Creating a pod to test consume secrets Aug 19 14:16:38.415: INFO: Waiting up to 5m0s for pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941" in namespace "secrets-1836" to be "Succeeded or Failed" Aug 19 14:16:38.627: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Pending", Reason="", readiness=false. Elapsed: 211.243498ms Aug 19 14:16:40.942: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Pending", Reason="", readiness=false. Elapsed: 2.52621312s Aug 19 14:16:42.947: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53134547s Aug 19 14:16:45.019: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Pending", Reason="", readiness=false. Elapsed: 6.603800389s Aug 19 14:16:47.969: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Pending", Reason="", readiness=false. Elapsed: 9.553622369s Aug 19 14:16:50.369: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Pending", Reason="", readiness=false. Elapsed: 11.953449768s Aug 19 14:16:52.486: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.070349419s STEP: Saw pod success Aug 19 14:16:52.486: INFO: Pod "pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941" satisfied condition "Succeeded or Failed" Aug 19 14:16:52.491: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941 container secret-volume-test: STEP: delete the pod Aug 19 14:16:52.861: INFO: Waiting for pod pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941 to disappear Aug 19 14:16:52.890: INFO: Pod pod-secrets-e1bdee14-b465-41c2-8f59-b0a76b741941 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:16:52.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1836" for this suite. • [SLOW TEST:15.025 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:16:52.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Aug 19 14:16:53.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9612' Aug 19 14:16:57.646: INFO: stderr: "" Aug 19 14:16:57.646: INFO: stdout: "pod/pause created\n" Aug 19 14:16:57.647: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 19 14:16:57.647: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9612" to be "running and ready" Aug 19 14:16:57.658: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.003741ms Aug 19 14:16:59.811: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1637397s Aug 19 14:17:01.820: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.17244471s Aug 19 14:17:01.820: INFO: Pod "pause" satisfied condition "running and ready" Aug 19 14:17:01.821: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Aug 19 14:17:01.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9612' Aug 19 14:17:03.247: INFO: stderr: "" Aug 19 14:17:03.247: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 19 14:17:03.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9612' Aug 19 14:17:04.608: INFO: stderr: "" Aug 19 14:17:04.609: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 19 14:17:04.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9612' Aug 19 14:17:06.002: INFO: stderr: "" Aug 19 14:17:06.002: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 19 14:17:06.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9612' Aug 19 14:17:07.590: INFO: stderr: "" Aug 19 14:17:07.590: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Aug 19 14:17:07.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9612' Aug 19 14:17:09.042: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 19 14:17:09.042: INFO: stdout: "pod \"pause\" force deleted\n" Aug 19 14:17:09.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9612' Aug 19 14:17:10.444: INFO: stderr: "No resources found in kubectl-9612 namespace.\n" Aug 19 14:17:10.444: INFO: stdout: "" Aug 19 14:17:10.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9612 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 19 14:17:11.904: INFO: stderr: "" Aug 19 14:17:11.904: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:17:11.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9612" for this suite. • [SLOW TEST:19.009 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":75,"skipped":1062,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:17:11.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:17:12.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8597" for this suite. 
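Annotation: the Kubectl label test that just passed drives a plain add/verify/remove cycle, and the commands it runs appear verbatim in the log above. A minimal sketch of the same cycle, reusing the pod name pause and namespace kubectl-9612 from the log (any labeled resource works the same way; --server/--kubeconfig omitted):

    # add a label, then confirm it appears in the extra TESTING-LABEL column
    kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-9612
    kubectl get pod pause -L testing-label --namespace=kubectl-9612
    # a trailing dash removes the label; the column then prints empty
    kubectl label pods pause testing-label- --namespace=kubectl-9612
    kubectl get pod pause -L testing-label --namespace=kubectl-9612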
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":76,"skipped":1104,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:17:12.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Aug 19 14:17:18.542: INFO: Pod pod-hostip-59aab47e-136d-4129-aa1f-d04108dafd07 has hostIP: 172.18.0.11 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:17:18.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2230" for this suite. 
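Annotation: the host IP assertion above ("has hostIP: 172.18.0.11") reads a single status field that is populated once the pod is bound to a node. A sketch of the equivalent query, using the pod name and namespace recorded in the log:

    # prints the address of the node the pod landed on, e.g. 172.18.0.11
    kubectl get pod pod-hostip-59aab47e-136d-4129-aa1f-d04108dafd07 \
        --namespace=pods-2230 -o jsonpath='{.status.hostIP}'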
• [SLOW TEST:6.331 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1120,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:17:18.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:17:22.716: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:17:24.734: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:17:26.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443442, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:17:29.979: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:17:29.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5525-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:17:31.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2013" for this suite. STEP: Destroying namespace "webhook-2013-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.067 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":78,"skipped":1186,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:17:31.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let 
webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:17:36.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:17:38.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:17:40.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443456, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:17:43.494: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:17:55.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5787" for this suite. 
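Annotation: the timeout behaviour exercised above is governed by two fields of the webhook registration: timeoutSeconds bounds how long the API server waits, and failurePolicy decides what happens when that bound is hit. The test registers its webhooks programmatically; the manifest below is only an illustrative sketch of a registration matching the first scenario (1s timeout against a server that sleeps 5s), assuming the webhook service name e2e-test-webhook and namespace webhook-5787 seen in the log, with the path and resource rule chosen for illustration:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: slow-webhook-example
    webhooks:
    - name: slow.webhook.example.com
      timeoutSeconds: 1          # shorter than the 5s the test server sleeps
      failurePolicy: Ignore      # admit the request instead of failing it on timeout
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-5787
          path: /always-allow-delay-5s   # illustrative path to a deliberately slow handler
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
    EOF

With failurePolicy: Fail instead, the same 1s timeout makes the request fail, which is the first scenario the test asserts.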
STEP: Destroying namespace "webhook-5787-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.336 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":79,"skipped":1206,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:17:55.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:17:56.096: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-14f254f7-675a-481d-b202-01517b23885b" in namespace "security-context-test-9365" to be "Succeeded or Failed" Aug 19 14:17:56.123: INFO: Pod "busybox-readonly-false-14f254f7-675a-481d-b202-01517b23885b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.84583ms Aug 19 14:17:58.131: INFO: Pod "busybox-readonly-false-14f254f7-675a-481d-b202-01517b23885b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03449657s Aug 19 14:18:00.137: INFO: Pod "busybox-readonly-false-14f254f7-675a-481d-b202-01517b23885b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040986654s Aug 19 14:18:00.137: INFO: Pod "busybox-readonly-false-14f254f7-675a-481d-b202-01517b23885b" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:18:00.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9365" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:18:00.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:18:00.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6904" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":81,"skipped":1246,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:18:00.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 19 14:18:00.924: INFO: Waiting up to 1m0s for all nodes to be ready Aug 19 14:19:01.006: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 19 14:19:01.067: INFO: Created pod: pod0-sched-preemption-low-priority Aug 19 14:19:01.155: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:19:37.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3690" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:96.696 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":82,"skipped":1255,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:19:37.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 19 14:19:37.650: INFO: Waiting up to 1m0s for all nodes to be ready Aug 19 14:20:37.712: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 19 14:20:37.749: INFO: Created pod: pod0-sched-preemption-low-priority Aug 19 14:20:37.842: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:20:59.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-982" for this suite. 
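Annotation: the preemption tests hinge on PriorityClass objects; when a higher-priority pod cannot be scheduled, the scheduler evicts a lower-priority pod that holds the resources it needs. A minimal sketch of the high end of that comparison (name, value, and description are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority-example
    value: 1000000              # larger value wins; smaller-value pods may be preempted
    globalDefault: false
    description: "illustrative class for preemption experiments"
    EOF
    # pods opt in via:  spec.priorityClassName: high-priority-example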
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:83.788 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":83,"skipped":1290,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:21:01.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:21:12.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5987" for this suite. 
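Annotation: the Kubelet logging test that follows amounts to running a one-shot busybox command and reading its stdout back through the API server. A sketch with hypothetical pod name and message, reusing the namespace from the log:

    # run a one-shot container that writes to stdout
    kubectl run busybox-logs --image=busybox --restart=Never \
        --namespace=kubelet-test-5987 -- sh -c 'echo "Hello from busybox"'
    # once the container has run, the kubelet serves its output back
    kubectl logs busybox-logs --namespace=kubelet-test-5987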
• [SLOW TEST:11.114 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1293,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:21:12.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:21:16.732: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:21:18.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443676, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443676, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443677, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443676, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:21:20.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443676, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443676, 
loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443677, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443676, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:21:24.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:21:25.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3844" for this suite. STEP: Destroying namespace "webhook-3844-markers" for this suite. 
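Annotation: the test above verifies a deliberate safety valve: the API server does not route ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects through admission webhooks, since a broken webhook could otherwise block its own repair or deletion. Even a rule that explicitly targets those resources, like the illustrative fragment below, is not honored for them, so cleanup always stays possible:

    # illustrative webhook rule fragment; skipped by the API server for these resources
    rules:
    - apiGroups: ["admissionregistration.k8s.io"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE", "DELETE"]
      resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]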
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.853 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":85,"skipped":1300,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:21:26.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-43bfe51f-540e-42d2-aec0-f46db8c9b23f STEP: Creating a pod to test consume configMaps Aug 19 14:21:26.660: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395" in namespace "projected-788" to be "Succeeded or Failed" Aug 19 14:21:26.793: INFO: Pod "pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395": Phase="Pending", Reason="", readiness=false. Elapsed: 132.223118ms Aug 19 14:21:28.799: INFO: Pod "pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138656781s Aug 19 14:21:30.815: INFO: Pod "pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.15441159s STEP: Saw pod success Aug 19 14:21:30.815: INFO: Pod "pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395" satisfied condition "Succeeded or Failed" Aug 19 14:21:31.001: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395 container projected-configmap-volume-test: STEP: delete the pod Aug 19 14:21:31.189: INFO: Waiting for pod pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395 to disappear Aug 19 14:21:31.275: INFO: Pod pod-projected-configmaps-ef9f1257-d4f4-4e67-946b-d123e82ba395 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:21:31.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-788" for this suite. • [SLOW TEST:5.060 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1301,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:21:31.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 14:21:31.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de" in namespace "projected-4094" to be "Succeeded or Failed" Aug 19 14:21:31.487: INFO: Pod "downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de": Phase="Pending", Reason="", readiness=false. Elapsed: 34.007714ms Aug 19 14:21:33.494: INFO: Pod "downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041673476s Aug 19 14:21:35.845: INFO: Pod "downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.392207909s Aug 19 14:21:38.079: INFO: Pod "downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.626582222s STEP: Saw pod success Aug 19 14:21:38.080: INFO: Pod "downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de" satisfied condition "Succeeded or Failed" Aug 19 14:21:38.152: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de container client-container: STEP: delete the pod Aug 19 14:21:38.404: INFO: Waiting for pod downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de to disappear Aug 19 14:21:38.461: INFO: Pod downwardapi-volume-327e82c4-c6c5-45c3-b0e7-1ef3cc12b0de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:21:38.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4094" for this suite. • [SLOW TEST:7.284 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:21:38.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2709 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 19 14:21:38.980: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 19 14:21:39.893: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:21:42.197: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:21:43.918: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:21:46.018: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:21:47.899: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:21:50.456: 
INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:21:51.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:21:53.911: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:21:55.929: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:21:57.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:21:59.971: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:22:01.915: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 19 14:22:01.923: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 14:22:04.220: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 14:22:05.930: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 19 14:22:16.157: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.243 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2709 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:22:16.158: INFO: >>> kubeConfig: /root/.kube/config I0819 14:22:16.218455 10 log.go:181] (0x4000850160) (0x400064a5a0) Create stream I0819 14:22:16.218614 10 log.go:181] (0x4000850160) (0x400064a5a0) Stream added, broadcasting: 1 I0819 14:22:16.221804 10 log.go:181] (0x4000850160) Reply frame received for 1 I0819 14:22:16.222028 10 log.go:181] (0x4000850160) (0x400064a640) Create stream I0819 14:22:16.222150 10 log.go:181] (0x4000850160) (0x400064a640) Stream added, broadcasting: 3 I0819 14:22:16.223654 10 log.go:181] (0x4000850160) Reply frame received for 3 I0819 14:22:16.223823 10 log.go:181] (0x4000850160) (0x4003e001e0) Create stream I0819 14:22:16.223898 10 log.go:181] (0x4000850160) (0x4003e001e0) Stream added, broadcasting: 5 I0819 14:22:16.225650 10 log.go:181] (0x4000850160) Reply frame received for 5 I0819 14:22:17.290779 10 log.go:181] (0x4000850160) Data frame received for 5 I0819 14:22:17.290943 10 log.go:181] (0x4003e001e0) (5) Data frame handling I0819 14:22:17.291122 10 log.go:181] (0x4000850160) Data frame received for 3 I0819 14:22:17.291323 10 log.go:181] (0x400064a640) (3) Data frame handling I0819 14:22:17.291496 10 log.go:181] (0x400064a640) (3) Data frame sent I0819 14:22:17.291603 10 log.go:181] (0x4000850160) Data frame received for 3 I0819 14:22:17.291696 10 log.go:181] (0x400064a640) (3) Data frame handling I0819 14:22:17.293293 10 log.go:181] (0x4000850160) Data frame received for 1 I0819 14:22:17.293480 10 log.go:181] (0x400064a5a0) (1) Data frame handling I0819 14:22:17.293644 10 log.go:181] (0x400064a5a0) (1) Data frame sent I0819 14:22:17.293852 10 log.go:181] (0x4000850160) (0x400064a5a0) Stream removed, broadcasting: 1 I0819 14:22:17.294082 10 log.go:181] (0x4000850160) Go away received I0819 14:22:17.294441 10 log.go:181] (0x4000850160) (0x400064a5a0) Stream removed, broadcasting: 1 I0819 14:22:17.294633 10 log.go:181] (0x4000850160) (0x400064a640) Stream removed, broadcasting: 3 I0819 14:22:17.294822 10 log.go:181] (0x4000850160) (0x4003e001e0) Stream removed, broadcasting: 5 Aug 19 14:22:17.295: INFO: Found all expected endpoints: [netserver-0] Aug 19 14:22:17.301: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.242 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2709 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 
14:22:17.301: INFO: >>> kubeConfig: /root/.kube/config I0819 14:22:17.358786 10 log.go:181] (0x4001a520b0) (0x40026cc640) Create stream I0819 14:22:17.358965 10 log.go:181] (0x4001a520b0) (0x40026cc640) Stream added, broadcasting: 1 I0819 14:22:17.362310 10 log.go:181] (0x4001a520b0) Reply frame received for 1 I0819 14:22:17.362470 10 log.go:181] (0x4001a520b0) (0x4003e00280) Create stream I0819 14:22:17.362551 10 log.go:181] (0x4001a520b0) (0x4003e00280) Stream added, broadcasting: 3 I0819 14:22:17.364104 10 log.go:181] (0x4001a520b0) Reply frame received for 3 I0819 14:22:17.364249 10 log.go:181] (0x4001a520b0) (0x40026cc6e0) Create stream I0819 14:22:17.364328 10 log.go:181] (0x4001a520b0) (0x40026cc6e0) Stream added, broadcasting: 5 I0819 14:22:17.365631 10 log.go:181] (0x4001a520b0) Reply frame received for 5 I0819 14:22:18.440475 10 log.go:181] (0x4001a520b0) Data frame received for 3 I0819 14:22:18.440707 10 log.go:181] (0x4003e00280) (3) Data frame handling I0819 14:22:18.441088 10 log.go:181] (0x4001a520b0) Data frame received for 5 I0819 14:22:18.441393 10 log.go:181] (0x40026cc6e0) (5) Data frame handling I0819 14:22:18.441975 10 log.go:181] (0x4003e00280) (3) Data frame sent I0819 14:22:18.442399 10 log.go:181] (0x4001a520b0) Data frame received for 3 I0819 14:22:18.442557 10 log.go:181] (0x4003e00280) (3) Data frame handling I0819 14:22:18.442749 10 log.go:181] (0x4001a520b0) Data frame received for 1 I0819 14:22:18.442916 10 log.go:181] (0x40026cc640) (1) Data frame handling I0819 14:22:18.443059 10 log.go:181] (0x40026cc640) (1) Data frame sent I0819 14:22:18.443200 10 log.go:181] (0x4001a520b0) (0x40026cc640) Stream removed, broadcasting: 1 I0819 14:22:18.443358 10 log.go:181] (0x4001a520b0) Go away received I0819 14:22:18.443720 10 log.go:181] (0x4001a520b0) (0x40026cc640) Stream removed, broadcasting: 1 I0819 14:22:18.443925 10 log.go:181] (0x4001a520b0) (0x4003e00280) Stream removed, broadcasting: 3 I0819 14:22:18.444075 10 log.go:181] (0x4001a520b0) (0x40026cc6e0) Stream removed, broadcasting: 5 Aug 19 14:22:18.444: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:22:18.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2709" for this suite. 
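Annotation: the UDP reachability probe above is the literal shell pipeline the framework execs inside the host-network test pod: it sends the string hostName to a netserver pod and expects the pod's hostname echoed back. Reproduced from the log, with the pod IP and port as recorded there:

    # one-second UDP probe; grep drops blank lines so only the echoed hostname remains
    echo hostName | nc -w 1 -u 10.244.2.243 8081 | grep -v '^\s*$'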
• [SLOW TEST:39.854 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1334,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:22:18.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 19 14:22:18.519: INFO: Waiting up to 5m0s for pod "pod-97745576-771b-473d-86c2-4e124743f34b" in namespace "emptydir-3589" to be "Succeeded or Failed" Aug 19 14:22:18.548: INFO: Pod "pod-97745576-771b-473d-86c2-4e124743f34b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.035494ms Aug 19 14:22:20.560: INFO: Pod "pod-97745576-771b-473d-86c2-4e124743f34b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04042952s Aug 19 14:22:22.566: INFO: Pod "pod-97745576-771b-473d-86c2-4e124743f34b": Phase="Running", Reason="", readiness=true. Elapsed: 4.046337805s Aug 19 14:22:24.585: INFO: Pod "pod-97745576-771b-473d-86c2-4e124743f34b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065361574s STEP: Saw pod success Aug 19 14:22:24.585: INFO: Pod "pod-97745576-771b-473d-86c2-4e124743f34b" satisfied condition "Succeeded or Failed" Aug 19 14:22:24.679: INFO: Trying to get logs from node latest-worker2 pod pod-97745576-771b-473d-86c2-4e124743f34b container test-container: STEP: delete the pod Aug 19 14:22:25.113: INFO: Waiting for pod pod-97745576-771b-473d-86c2-4e124743f34b to disappear Aug 19 14:22:25.302: INFO: Pod pod-97745576-771b-473d-86c2-4e124743f34b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:22:25.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3589" for this suite. 
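Annotation: the EmptyDir permutation above, (root,0644,default), writes a file as root with mode 0644 on a default-medium emptyDir and checks what the container observes. A sketch of an equivalent pod (names, image, and mount path are illustrative; the real test uses its own test image):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # create a file with mode 0644, then show the permissions as seen in-container
        command: ["sh", "-c", "touch /mnt/test && chmod 0644 /mnt/test && ls -l /mnt/test"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir: {}          # default medium: node-local disk
    EOF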
• [SLOW TEST:6.905 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1338,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:22:25.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 19 14:22:32.542: INFO: Successfully updated pod "pod-update-activedeadlineseconds-44ab1705-b88c-4639-b2f5-fd04ebcf7f32" Aug 19 14:22:32.542: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-44ab1705-b88c-4639-b2f5-fd04ebcf7f32" in namespace "pods-6061" to be "terminated due to deadline exceeded" Aug 19 14:22:32.587: INFO: Pod "pod-update-activedeadlineseconds-44ab1705-b88c-4639-b2f5-fd04ebcf7f32": Phase="Running", Reason="", readiness=true. Elapsed: 44.545621ms Aug 19 14:22:34.592: INFO: Pod "pod-update-activedeadlineseconds-44ab1705-b88c-4639-b2f5-fd04ebcf7f32": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.049947791s Aug 19 14:22:34.593: INFO: Pod "pod-update-activedeadlineseconds-44ab1705-b88c-4639-b2f5-fd04ebcf7f32" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:22:34.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6061" for this suite. 
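Annotation: activeDeadlineSeconds is one of the few pod-spec fields that may be changed after creation (it can only be set or shortened), which is what the update step above does; once the deadline elapses the pod is failed with reason DeadlineExceeded, exactly as the log records. A sketch of an equivalent update, reusing the pod name and namespace from the log and an illustrative 5s deadline:

    # shrink the deadline on a running pod; it then moves to Failed/DeadlineExceeded
    kubectl patch pod pod-update-activedeadlineseconds-44ab1705-b88c-4639-b2f5-fd04ebcf7f32 \
        --namespace=pods-6061 --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'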
• [SLOW TEST:9.238 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1354,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:22:34.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:22:34.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1872" for this suite. 
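The QOS test above relies on the classification rule: a pod whose containers all set resource requests equal to limits for both cpu and memory gets status.qosClass=Guaranteed; requests below limits gives Burstable; no requests or limits gives BestEffort. A sketch of a Guaranteed-class pod, with illustrative image and quantities:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Identical requests and limits on every container => Guaranteed.
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("128Mi"),
	}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "qos-guaranteed-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "nginx:1.25", // illustrative
				Resources: corev1.ResourceRequirements{
					Requests: rl,
					Limits:   rl,
				},
			}},
		},
	}

	b, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(b))
}
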
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":91,"skipped":1356,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:22:34.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:22:39.739: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:22:41.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:22:44.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443759, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the 
endpoint Aug 19 14:22:47.487: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:22:47.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-433" for this suite. STEP: Destroying namespace "webhook-433-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.201 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":92,"skipped":1362,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:22:49.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] 
[k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:23:50.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6692" for this suite. • [SLOW TEST:61.213 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1366,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:23:50.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:23:51.737: INFO: Creating deployment "test-recreate-deployment" Aug 19 14:23:51.802: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 19 14:23:52.026: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 19 14:23:54.088: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 19 14:23:54.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443831, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:23:56.517: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443831, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:23:58.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443831, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:24:00.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443832, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443831, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:24:02.591: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 19 14:24:03.027: INFO: Updating deployment test-recreate-deployment Aug 19 14:24:03.028: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 19 14:24:05.922: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2791 /apis/apps/v1/namespaces/deployment-2791/deployments/test-recreate-deployment 
53cf1104-53a8-4b14-9de9-7b9e419c0611 1509955 2 2020-08-19 14:23:51 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-19 14:24:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-19 14:24:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4001ebc0f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-19 14:24:05 +0000 UTC,LastTransitionTime:2020-08-19 14:24:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-08-19 14:24:05 +0000 UTC,LastTransitionTime:2020-08-19 14:23:51 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 19 14:24:06.084: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-2791 /apis/apps/v1/namespaces/deployment-2791/replicasets/test-recreate-deployment-f79dd4667 93486de3-2434-40de-ab4d-2a1a13ae2dba 1509953 1 2020-08-19 14:24:04 +0000 UTC map[name:sample-pod-3 
pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 53cf1104-53a8-4b14-9de9-7b9e419c0611 0x4003cda1f0 0x4003cda1f1}] [] [{kube-controller-manager Update apps/v1 2020-08-19 14:24:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53cf1104-53a8-4b14-9de9-7b9e419c0611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003cda268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 19 14:24:06.084: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 19 14:24:06.085: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-2791 /apis/apps/v1/namespaces/deployment-2791/replicasets/test-recreate-deployment-c96cf48f 91cedd98-ca12-44ea-b017-0854ee139a77 1509943 2 2020-08-19 14:23:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 53cf1104-53a8-4b14-9de9-7b9e419c0611 0x4003cda0ef 0x4003cda100}] [] [{kube-controller-manager Update apps/v1 2020-08-19 14:24:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53cf1104-53a8-4b14-9de9-7b9e419c0611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003cda188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 19 14:24:06.458: INFO: Pod "test-recreate-deployment-f79dd4667-9xcsh" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-9xcsh test-recreate-deployment-f79dd4667- deployment-2791 /api/v1/namespaces/deployment-2791/pods/test-recreate-deployment-f79dd4667-9xcsh 86f24d53-1db3-45ce-b5db-22f0d748c921 1509957 0 2020-08-19 14:24:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 93486de3-2434-40de-ab4d-2a1a13ae2dba 0x4003cda740 0x4003cda741}] [] [{kube-controller-manager Update v1 2020-08-19 14:24:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"93486de3-2434-40de-ab4d-2a1a13ae2dba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:24:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jld9x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jld9x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jld9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:24:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:24:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:24:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:24:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:24:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:24:06.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2791" for this suite. • [SLOW TEST:16.493 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":94,"skipped":1388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:24:06.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:24:12.021: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:24:14.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443852, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443852, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443852, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443851, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:24:16.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443852, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443852, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443852, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733443851, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:24:19.120: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 19 14:24:25.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config attach --namespace=webhook-2579 to-be-attached-pod -i -c=container1' Aug 19 14:24:30.861: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:24:30.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2579" for this suite. STEP: Destroying namespace "webhook-2579-markers" for this suite. 
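The deny-attach test above registers a validating webhook that rejects the CONNECT request made to the pods/attach subresource, which is what 'kubectl attach' performs; hence the rc: 1 on the attach attempt. A sketch of such a registration, reusing the webhook-2579 namespace and e2e-test-webhook service name from the log, but with an assumed handler path, port, and placeholder CA bundle rather than the suite's generated ones:

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	failurePolicy := admissionv1.Fail
	sideEffects := admissionv1.SideEffectClassNone
	path := "/pods/attach" // assumed handler path on the webhook server
	port := int32(8443)    // assumed service port

	cfg := &admissionv1.ValidatingWebhookConfiguration{
		TypeMeta:   metav1.TypeMeta{APIVersion: "admissionregistration.k8s.io/v1", Kind: "ValidatingWebhookConfiguration"},
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// kubectl attach reaches the API server as a CONNECT on pods/attach.
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Connect},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-2579",
					Name:      "e2e-test-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: []byte("-----BEGIN CERTIFICATE-----..."), // CA that signed the server cert
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}

	b, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(b))
}
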
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.234 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":95,"skipped":1414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:24:30.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5953 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5953 STEP: creating replication controller externalsvc in namespace services-5953 I0819 14:24:31.256444 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5953, replica count: 2 I0819 14:24:34.307715 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:24:37.308352 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:24:40.309277 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 19 14:24:40.374: INFO: Creating new exec pod Aug 19 14:24:48.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5953 execpodv2kvb -- /bin/sh -x -c nslookup nodeport-service.services-5953.svc.cluster.local' Aug 19 14:24:49.937: INFO: stderr: "I0819 14:24:49.813549 889 log.go:181] (0x40006b0000) (0x400097e000) Create stream\nI0819 14:24:49.818438 889 log.go:181] 
(0x40006b0000) (0x400097e000) Stream added, broadcasting: 1\nI0819 14:24:49.829093 889 log.go:181] (0x40006b0000) Reply frame received for 1\nI0819 14:24:49.829857 889 log.go:181] (0x40006b0000) (0x4000d16280) Create stream\nI0819 14:24:49.829943 889 log.go:181] (0x40006b0000) (0x4000d16280) Stream added, broadcasting: 3\nI0819 14:24:49.831295 889 log.go:181] (0x40006b0000) Reply frame received for 3\nI0819 14:24:49.831502 889 log.go:181] (0x40006b0000) (0x400097e140) Create stream\nI0819 14:24:49.831544 889 log.go:181] (0x40006b0000) (0x400097e140) Stream added, broadcasting: 5\nI0819 14:24:49.832518 889 log.go:181] (0x40006b0000) Reply frame received for 5\nI0819 14:24:49.901157 889 log.go:181] (0x40006b0000) Data frame received for 5\nI0819 14:24:49.901533 889 log.go:181] (0x400097e140) (5) Data frame handling\nI0819 14:24:49.902400 889 log.go:181] (0x400097e140) (5) Data frame sent\n+ nslookup nodeport-service.services-5953.svc.cluster.local\nI0819 14:24:49.909019 889 log.go:181] (0x40006b0000) Data frame received for 3\nI0819 14:24:49.909072 889 log.go:181] (0x4000d16280) (3) Data frame handling\nI0819 14:24:49.909118 889 log.go:181] (0x4000d16280) (3) Data frame sent\nI0819 14:24:49.910089 889 log.go:181] (0x40006b0000) Data frame received for 3\nI0819 14:24:49.910191 889 log.go:181] (0x4000d16280) (3) Data frame handling\nI0819 14:24:49.910348 889 log.go:181] (0x4000d16280) (3) Data frame sent\nI0819 14:24:49.910803 889 log.go:181] (0x40006b0000) Data frame received for 5\nI0819 14:24:49.910896 889 log.go:181] (0x400097e140) (5) Data frame handling\nI0819 14:24:49.911272 889 log.go:181] (0x40006b0000) Data frame received for 3\nI0819 14:24:49.911361 889 log.go:181] (0x4000d16280) (3) Data frame handling\nI0819 14:24:49.912592 889 log.go:181] (0x40006b0000) Data frame received for 1\nI0819 14:24:49.912676 889 log.go:181] (0x400097e000) (1) Data frame handling\nI0819 14:24:49.912839 889 log.go:181] (0x400097e000) (1) Data frame sent\nI0819 14:24:49.914175 889 log.go:181] (0x40006b0000) (0x400097e000) Stream removed, broadcasting: 1\nI0819 14:24:49.917419 889 log.go:181] (0x40006b0000) Go away received\nI0819 14:24:49.923604 889 log.go:181] (0x40006b0000) (0x400097e000) Stream removed, broadcasting: 1\nI0819 14:24:49.924137 889 log.go:181] (0x40006b0000) (0x4000d16280) Stream removed, broadcasting: 3\nI0819 14:24:49.924492 889 log.go:181] (0x40006b0000) (0x400097e140) Stream removed, broadcasting: 5\n" Aug 19 14:24:49.938: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5953.svc.cluster.local\tcanonical name = externalsvc.services-5953.svc.cluster.local.\nName:\texternalsvc.services-5953.svc.cluster.local\nAddress: 10.101.156.169\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5953, will wait for the garbage collector to delete the pods Aug 19 14:24:49.999: INFO: Deleting ReplicationController externalsvc took: 6.195757ms Aug 19 14:24:50.399: INFO: Terminating ReplicationController externalsvc pods took: 400.84213ms Aug 19 14:24:59.715: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:24:59.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5953" for this suite. 
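The service test above mutates a NodePort service into type=ExternalName, after which cluster DNS answers lookups of the service name with a CNAME to the external name, visible in the nslookup output ("canonical name = externalsvc.services-5953.svc.cluster.local."). A sketch of the target shape, reusing the names from the log; note that clusterIP must be empty for an ExternalName service, so the update also drops the allocated IP and node ports, which that type does not use:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := &corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service", Namespace: "services-5953"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeExternalName,
			// Cluster DNS serves a CNAME to this name for lookups of
			// nodeport-service.services-5953.svc.cluster.local.
			ExternalName: "externalsvc.services-5953.svc.cluster.local",
		},
	}

	b, err := yaml.Marshal(svc)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(b))
}
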
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:28.773 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":96,"skipped":1461,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:24:59.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 19 14:25:12.579: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 19 14:25:12.671: INFO: Pod pod-with-poststart-exec-hook still exists Aug 19 14:25:14.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 19 14:25:14.692: INFO: Pod pod-with-poststart-exec-hook still exists Aug 19 14:25:16.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 19 14:25:16.677: INFO: Pod pod-with-poststart-exec-hook still exists Aug 19 14:25:18.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 19 14:25:18.676: INFO: Pod pod-with-poststart-exec-hook still exists Aug 19 14:25:20.671: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 19 14:25:20.677: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:25:20.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8415" for this suite. 
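The lifecycle test above attaches a postStart exec hook to a container; the kubelet runs the hook immediately after the container is created, and the container is not reported Running until the hook completes, which is what the "check poststart hook" step observes. A sketch with illustrative image and command (in the 1.19 API line the handler type was named Handler; newer k8s.io/api releases call it LifecycleHandler, as used here):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.36", // illustrative
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs right after container creation; a failing hook
					// kills the container per the pod's restart policy.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
						},
					},
				},
			}},
		},
	}

	b, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(b))
}
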
• [SLOW TEST:20.937 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1478,"failed":0} [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:25:20.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 19 14:25:20.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4677 /api/v1/namespaces/watch-4677/configmaps/e2e-watch-test-resource-version 1a252d4f-324a-4964-a1ec-9b6cc814efb7 1510403 0 2020-08-19 14:25:20 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-19 14:25:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 14:25:20.809: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4677 /api/v1/namespaces/watch-4677/configmaps/e2e-watch-test-resource-version 1a252d4f-324a-4964-a1ec-9b6cc814efb7 1510404 0 2020-08-19 14:25:20 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-19 14:25:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:25:20.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4677" 
for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":98,"skipped":1478,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:25:20.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:25:20.883: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 19 14:25:42.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 create -f -' Aug 19 14:25:49.604: INFO: stderr: "" Aug 19 14:25:49.604: INFO: stdout: "e2e-test-crd-publish-openapi-8284-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 19 14:25:49.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 delete e2e-test-crd-publish-openapi-8284-crds test-foo' Aug 19 14:25:51.460: INFO: stderr: "" Aug 19 14:25:51.460: INFO: stdout: "e2e-test-crd-publish-openapi-8284-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 19 14:25:51.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 apply -f -' Aug 19 14:25:54.041: INFO: stderr: "" Aug 19 14:25:54.041: INFO: stdout: "e2e-test-crd-publish-openapi-8284-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 19 14:25:54.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 delete e2e-test-crd-publish-openapi-8284-crds test-foo' Aug 19 14:25:55.342: INFO: stderr: "" Aug 19 14:25:55.342: INFO: stdout: "e2e-test-crd-publish-openapi-8284-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 19 14:25:55.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 create -f -' Aug 19 14:25:57.847: INFO: rc: 1 Aug 19 14:25:57.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 apply -f -' Aug 19 14:26:00.463: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 19 14:26:00.464: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 create -f -' Aug 19 14:26:02.510: INFO: rc: 1 Aug 19 14:26:02.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9611 apply -f -' Aug 19 14:26:05.706: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 19 14:26:05.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8284-crds' Aug 19 14:26:09.094: INFO: stderr: "" Aug 19 14:26:09.095: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 19 14:26:09.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8284-crds.metadata' Aug 19 14:26:12.007: INFO: stderr: "" Aug 19 14:26:12.008: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. 
Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only.
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 19 14:26:12.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8284-crds.spec' Aug 19 14:26:15.005: INFO: stderr: "" Aug 19 14:26:15.005: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 19 14:26:15.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8284-crds.spec.bars' Aug 19 14:26:19.237: INFO: stderr: "" Aug 19 14:26:19.238: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8284-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 19 14:26:19.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8284-crds.spec.bars2' Aug 19 14:26:22.143: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:26:44.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9611" for this suite.
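For reference, the CR objects this test pipes to kubectl follow the schema the CRD publishes; a minimal valid manifest, reconstructed from the kubectl explain output above rather than copied from the test fixture (the only required property is spec.bars[].name), would look like:

# Reconstructed from the published schema; field values are illustrative.
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-8284-crd
metadata:
  name: test-foo
spec:
  bars:
  - name: example-bar   # required by the schema
    age: "24"           # optional, published as <string>
    bazs:               # optional list of strings
    - baz-a

Omitting spec.bars[].name, or adding a property the schema does not declare, is exactly what drives the rejected create and apply calls above to exit with rc: 1.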
• [SLOW TEST:83.724 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":99,"skipped":1478,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:26:44.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60 Aug 19 14:26:44.869: INFO: Pod name my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60: Found 0 pods out of 1 Aug 19 14:26:49.994: INFO: Pod name my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60: Found 1 pods out of 1 Aug 19 14:26:49.994: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60" are running Aug 19 14:26:50.001: INFO: Pod "my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60-jqz4s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 14:26:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 14:26:49 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 14:26:49 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 14:26:44 +0000 UTC Reason: Message:}]) Aug 19 14:26:50.004: INFO: Trying to dial the pod Aug 19 14:26:55.024: INFO: Controller my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60: Got expected result from replica 1 [my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60-jqz4s]: "my-hostname-basic-d5a3ee88-3087-4e32-a427-e262bf1e9c60-jqz4s", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:26:55.024: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4653" for this suite. • [SLOW TEST:10.491 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":100,"skipped":1496,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:26:55.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-crwd5 in namespace proxy-5677 I0819 14:26:55.976615 10 runners.go:190] Created replication controller with name: proxy-service-crwd5, namespace: proxy-5677, replica count: 1 I0819 14:26:57.028026 10 runners.go:190] proxy-service-crwd5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:26:58.028563 10 runners.go:190] proxy-service-crwd5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:26:59.029191 10 runners.go:190] proxy-service-crwd5 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:27:00.030148 10 runners.go:190] proxy-service-crwd5 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0819 14:27:01.030918 10 runners.go:190] proxy-service-crwd5 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 14:27:01.041: INFO: setup took 5.202596363s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 19 14:27:01.050: INFO: (0) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 7.631539ms) Aug 19 14:27:01.050: INFO: (0) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 7.984561ms) Aug 19 14:27:01.050: INFO: (0) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 7.252277ms) Aug 19 14:27:01.050: INFO: (0) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... 
(200; 7.950508ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 13.455192ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 13.464593ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 13.992222ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 13.69521ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 14.014965ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 14.131024ms) Aug 19 14:27:01.056: INFO: (0) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 14.381743ms) Aug 19 14:27:01.057: INFO: (0) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 15.277791ms) Aug 19 14:27:01.058: INFO: (0) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test<... (200; 5.120871ms) Aug 19 14:27:01.067: INFO: (1) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 5.536474ms) Aug 19 14:27:01.067: INFO: (1) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 6.029978ms) Aug 19 14:27:01.067: INFO: (1) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 6.288969ms) Aug 19 14:27:01.068: INFO: (1) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 8.142062ms) Aug 19 14:27:01.069: INFO: (1) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 8.19107ms) Aug 19 14:27:01.070: INFO: (1) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 8.322241ms) Aug 19 14:27:01.070: INFO: (1) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 8.568279ms) Aug 19 14:27:01.074: INFO: (2) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 3.707103ms) Aug 19 14:27:01.074: INFO: (2) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... (200; 5.869318ms) Aug 19 14:27:01.076: INFO: (2) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 6.101246ms) Aug 19 14:27:01.076: INFO: (2) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 6.383003ms) Aug 19 14:27:01.077: INFO: (2) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 6.705719ms) Aug 19 14:27:01.077: INFO: (2) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... 
(200; 6.67488ms) Aug 19 14:27:01.077: INFO: (2) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.085724ms) Aug 19 14:27:01.078: INFO: (2) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 8.220957ms) Aug 19 14:27:01.078: INFO: (2) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 7.825932ms) Aug 19 14:27:01.078: INFO: (2) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 8.396025ms) Aug 19 14:27:01.078: INFO: (2) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 8.225385ms) Aug 19 14:27:01.083: INFO: (3) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 3.915815ms) Aug 19 14:27:01.083: INFO: (3) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... (200; 6.494747ms) Aug 19 14:27:01.085: INFO: (3) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 6.739122ms) Aug 19 14:27:01.086: INFO: (3) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 6.861685ms) Aug 19 14:27:01.086: INFO: (3) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 6.835106ms) Aug 19 14:27:01.086: INFO: (3) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 7.052909ms) Aug 19 14:27:01.086: INFO: (3) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 7.323673ms) Aug 19 14:27:01.086: INFO: (3) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.551272ms) Aug 19 14:27:01.087: INFO: (3) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 7.577765ms) Aug 19 14:27:01.087: INFO: (3) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 7.774084ms) Aug 19 14:27:01.091: INFO: (4) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 4.130865ms) Aug 19 14:27:01.091: INFO: (4) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.602035ms) Aug 19 14:27:01.092: INFO: (4) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 5.538649ms) Aug 19 14:27:01.093: INFO: (4) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.498053ms) Aug 19 14:27:01.093: INFO: (4) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 6.103795ms) Aug 19 14:27:01.093: INFO: (4) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 6.093401ms) Aug 19 14:27:01.093: INFO: (4) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 6.479763ms) Aug 19 14:27:01.094: INFO: (4) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 6.708857ms) Aug 19 14:27:01.094: INFO: (4) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... 
(200; 6.690849ms) Aug 19 14:27:01.094: INFO: (4) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 6.865228ms) Aug 19 14:27:01.094: INFO: (4) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 7.180864ms) Aug 19 14:27:01.094: INFO: (4) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 7.561066ms) Aug 19 14:27:01.094: INFO: (4) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 7.578828ms) Aug 19 14:27:01.099: INFO: (5) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 4.126796ms) Aug 19 14:27:01.099: INFO: (5) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... (200; 5.020344ms) Aug 19 14:27:01.102: INFO: (5) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 7.008507ms) Aug 19 14:27:01.102: INFO: (5) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 7.386946ms) Aug 19 14:27:01.102: INFO: (5) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 7.65847ms) Aug 19 14:27:01.102: INFO: (5) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.478598ms) Aug 19 14:27:01.102: INFO: (5) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 7.558765ms) Aug 19 14:27:01.102: INFO: (5) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 7.653466ms) Aug 19 14:27:01.103: INFO: (5) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 7.701054ms) Aug 19 14:27:01.103: INFO: (5) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 8.062024ms) Aug 19 14:27:01.103: INFO: (5) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 7.906044ms) Aug 19 14:27:01.103: INFO: (5) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 8.265614ms) Aug 19 14:27:01.103: INFO: (5) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 8.276529ms) Aug 19 14:27:01.108: INFO: (6) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.253051ms) Aug 19 14:27:01.108: INFO: (6) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... 
(200; 5.035738ms) Aug 19 14:27:01.109: INFO: (6) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 5.347497ms) Aug 19 14:27:01.109: INFO: (6) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 5.395677ms) Aug 19 14:27:01.109: INFO: (6) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.420783ms) Aug 19 14:27:01.109: INFO: (6) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 5.935223ms) Aug 19 14:27:01.109: INFO: (6) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 5.83026ms) Aug 19 14:27:01.110: INFO: (6) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 6.243738ms) Aug 19 14:27:01.110: INFO: (6) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 6.790364ms) Aug 19 14:27:01.110: INFO: (6) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 6.769631ms) Aug 19 14:27:01.111: INFO: (6) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 7.225347ms) Aug 19 14:27:01.111: INFO: (6) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 7.075781ms) Aug 19 14:27:01.111: INFO: (6) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 7.262179ms) Aug 19 14:27:01.111: INFO: (6) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 7.118374ms) Aug 19 14:27:01.111: INFO: (6) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.420351ms) Aug 19 14:27:01.115: INFO: (7) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 3.918917ms) Aug 19 14:27:01.118: INFO: (7) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 6.219414ms) Aug 19 14:27:01.118: INFO: (7) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 6.530453ms) Aug 19 14:27:01.118: INFO: (7) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 6.231423ms) Aug 19 14:27:01.119: INFO: (7) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.144756ms) Aug 19 14:27:01.119: INFO: (7) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 7.004126ms) Aug 19 14:27:01.119: INFO: (7) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 7.584267ms) Aug 19 14:27:01.119: INFO: (7) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... (200; 7.66687ms) Aug 19 14:27:01.119: INFO: (7) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 7.864584ms) Aug 19 14:27:01.120: INFO: (7) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 8.944661ms) Aug 19 14:27:01.120: INFO: (7) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 8.451277ms) Aug 19 14:27:01.120: INFO: (7) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 8.77856ms) Aug 19 14:27:01.121: INFO: (7) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... 
(200; 8.457852ms) Aug 19 14:27:01.120: INFO: (7) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 9.084639ms) Aug 19 14:27:01.124: INFO: (8) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 3.2555ms) Aug 19 14:27:01.124: INFO: (8) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 3.420374ms) Aug 19 14:27:01.125: INFO: (8) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 4.234688ms) Aug 19 14:27:01.127: INFO: (8) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test<... (200; 6.825573ms) Aug 19 14:27:01.128: INFO: (8) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 7.44245ms) Aug 19 14:27:01.128: INFO: (8) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 7.074369ms) Aug 19 14:27:01.128: INFO: (8) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 7.550211ms) Aug 19 14:27:01.128: INFO: (8) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 6.978308ms) Aug 19 14:27:01.129: INFO: (8) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 7.615083ms) Aug 19 14:27:01.129: INFO: (8) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 7.35887ms) Aug 19 14:27:01.129: INFO: (8) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 5.400757ms) Aug 19 14:27:01.129: INFO: (8) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.501306ms) Aug 19 14:27:01.133: INFO: (9) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 3.819334ms) Aug 19 14:27:01.134: INFO: (9) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.194713ms) Aug 19 14:27:01.134: INFO: (9) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 5.084791ms) Aug 19 14:27:01.134: INFO: (9) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 4.268518ms) Aug 19 14:27:01.135: INFO: (9) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.443891ms) Aug 19 14:27:01.137: INFO: (9) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 4.943813ms) Aug 19 14:27:01.137: INFO: (9) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.458348ms) Aug 19 14:27:01.137: INFO: (9) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 4.449108ms) Aug 19 14:27:01.137: INFO: (9) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... 
(200; 4.925233ms) Aug 19 14:27:01.137: INFO: (9) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 5.897645ms) Aug 19 14:27:01.137: INFO: (9) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 6.353963ms) Aug 19 14:27:01.138: INFO: (9) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 7.358112ms) Aug 19 14:27:01.138: INFO: (9) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 5.871938ms) Aug 19 14:27:01.138: INFO: (9) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 6.352835ms) Aug 19 14:27:01.138: INFO: (9) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 5.677525ms) Aug 19 14:27:01.142: INFO: (10) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 3.163094ms) Aug 19 14:27:01.142: INFO: (10) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 3.843358ms) Aug 19 14:27:01.143: INFO: (10) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.002219ms) Aug 19 14:27:01.143: INFO: (10) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 3.904968ms) Aug 19 14:27:01.146: INFO: (10) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 7.30427ms) Aug 19 14:27:01.146: INFO: (10) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 7.264331ms) Aug 19 14:27:01.147: INFO: (10) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 7.812024ms) Aug 19 14:27:01.147: INFO: (10) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 8.001566ms) Aug 19 14:27:01.147: INFO: (10) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 8.390709ms) Aug 19 14:27:01.147: INFO: (10) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 8.541009ms) Aug 19 14:27:01.148: INFO: (10) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... (200; 4.592263ms) Aug 19 14:27:01.154: INFO: (11) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 5.28296ms) Aug 19 14:27:01.154: INFO: (11) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 5.062567ms) Aug 19 14:27:01.154: INFO: (11) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... 
(200; 5.098695ms) Aug 19 14:27:01.154: INFO: (11) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.435144ms) Aug 19 14:27:01.154: INFO: (11) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 5.384457ms) Aug 19 14:27:01.154: INFO: (11) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 3.524956ms) Aug 19 14:27:01.160: INFO: (12) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.006526ms) Aug 19 14:27:01.160: INFO: (12) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 4.16786ms) Aug 19 14:27:01.160: INFO: (12) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.196836ms) Aug 19 14:27:01.161: INFO: (12) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 4.735842ms) Aug 19 14:27:01.161: INFO: (12) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 4.884061ms) Aug 19 14:27:01.161: INFO: (12) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 4.983362ms) Aug 19 14:27:01.161: INFO: (12) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 5.189384ms) Aug 19 14:27:01.162: INFO: (12) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 6.073808ms) Aug 19 14:27:01.163: INFO: (12) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 6.480785ms) Aug 19 14:27:01.163: INFO: (12) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 6.590477ms) Aug 19 14:27:01.163: INFO: (12) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 6.909201ms) Aug 19 14:27:01.163: INFO: (12) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 6.98246ms) Aug 19 14:27:01.163: INFO: (12) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 7.034914ms) Aug 19 14:27:01.163: INFO: (12) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test<... (200; 3.569388ms) Aug 19 14:27:01.167: INFO: (13) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 3.646706ms) Aug 19 14:27:01.167: INFO: (13) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 3.73049ms) Aug 19 14:27:01.173: INFO: (13) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 9.383729ms) Aug 19 14:27:01.173: INFO: (13) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 9.43018ms) Aug 19 14:27:01.173: INFO: (13) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 9.584821ms) Aug 19 14:27:01.173: INFO: (13) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 9.967905ms) Aug 19 14:27:01.173: INFO: (13) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 4.62984ms) Aug 19 14:27:01.180: INFO: (14) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 5.170619ms) Aug 19 14:27:01.180: INFO: (14) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... 
(200; 5.321669ms) Aug 19 14:27:01.180: INFO: (14) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.446346ms) Aug 19 14:27:01.180: INFO: (14) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 5.362251ms) Aug 19 14:27:01.181: INFO: (14) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 6.3447ms) Aug 19 14:27:01.182: INFO: (14) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 6.419729ms) Aug 19 14:27:01.182: INFO: (14) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 7.109874ms) Aug 19 14:27:01.182: INFO: (14) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 7.26039ms) Aug 19 14:27:01.182: INFO: (14) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 7.122877ms) Aug 19 14:27:01.182: INFO: (14) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 7.30197ms) Aug 19 14:27:01.182: INFO: (14) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 7.457037ms) Aug 19 14:27:01.183: INFO: (14) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 7.689158ms) Aug 19 14:27:01.183: INFO: (14) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 7.497878ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.03972ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 4.534199ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 5.143337ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 5.027162ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 5.125383ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.043642ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 5.092176ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 4.778013ms) Aug 19 14:27:01.188: INFO: (15) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 5.283597ms) Aug 19 14:27:01.189: INFO: (15) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 5.607135ms) Aug 19 14:27:01.189: INFO: (15) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 5.859784ms) Aug 19 14:27:01.189: INFO: (15) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 6.35528ms) Aug 19 14:27:01.190: INFO: (15) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 6.432907ms) Aug 19 14:27:01.190: INFO: (15) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 6.669291ms) Aug 19 14:27:01.193: INFO: (16) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... 
(200; 3.152249ms) Aug 19 14:27:01.194: INFO: (16) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.155179ms) Aug 19 14:27:01.194: INFO: (16) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.125375ms) Aug 19 14:27:01.195: INFO: (16) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.202648ms) Aug 19 14:27:01.195: INFO: (16) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 4.53392ms) Aug 19 14:27:01.195: INFO: (16) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 4.889807ms) Aug 19 14:27:01.195: INFO: (16) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 5.319333ms) Aug 19 14:27:01.196: INFO: (16) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 5.408693ms) Aug 19 14:27:01.196: INFO: (16) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 5.362465ms) Aug 19 14:27:01.196: INFO: (16) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 5.399655ms) Aug 19 14:27:01.196: INFO: (16) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 6.034346ms) Aug 19 14:27:01.196: INFO: (16) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 5.558787ms) Aug 19 14:27:01.196: INFO: (16) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 5.810129ms) Aug 19 14:27:01.197: INFO: (16) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 6.940503ms) Aug 19 14:27:01.200: INFO: (17) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 3.313369ms) Aug 19 14:27:01.200: INFO: (17) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 3.304289ms) Aug 19 14:27:01.201: INFO: (17) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 4.250726ms) Aug 19 14:27:01.202: INFO: (17) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 4.421129ms) Aug 19 14:27:01.202: INFO: (17) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 4.7138ms) Aug 19 14:27:01.202: INFO: (17) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 4.629709ms) Aug 19 14:27:01.202: INFO: (17) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 5.104782ms) Aug 19 14:27:01.203: INFO: (17) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 5.734522ms) Aug 19 14:27:01.203: INFO: (17) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 6.149298ms) Aug 19 14:27:01.203: INFO: (17) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: ... (200; 6.16584ms) Aug 19 14:27:01.204: INFO: (17) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 6.369495ms) Aug 19 14:27:01.204: INFO: (17) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 6.35504ms) Aug 19 14:27:01.207: INFO: (18) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... 
(200; 3.074144ms) Aug 19 14:27:01.207: INFO: (18) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 3.262197ms) Aug 19 14:27:01.208: INFO: (18) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 3.85863ms) Aug 19 14:27:01.208: INFO: (18) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.142881ms) Aug 19 14:27:01.208: INFO: (18) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname2/proxy/: tls qux (200; 4.583313ms) Aug 19 14:27:01.209: INFO: (18) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: test (200; 5.18461ms) Aug 19 14:27:01.209: INFO: (18) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.16396ms) Aug 19 14:27:01.210: INFO: (18) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... (200; 5.718371ms) Aug 19 14:27:01.210: INFO: (18) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname1/proxy/: foo (200; 5.981784ms) Aug 19 14:27:01.210: INFO: (18) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname1/proxy/: foo (200; 6.059712ms) Aug 19 14:27:01.211: INFO: (18) /api/v1/namespaces/proxy-5677/services/http:proxy-service-crwd5:portname2/proxy/: bar (200; 7.251219ms) Aug 19 14:27:01.211: INFO: (18) /api/v1/namespaces/proxy-5677/services/https:proxy-service-crwd5:tlsportname1/proxy/: tls baz (200; 7.178938ms) Aug 19 14:27:01.215: INFO: (19) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2/proxy/: test (200; 3.446098ms) Aug 19 14:27:01.216: INFO: (19) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:1080/proxy/: ... (200; 4.464942ms) Aug 19 14:27:01.216: INFO: (19) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:460/proxy/: tls baz (200; 4.515979ms) Aug 19 14:27:01.216: INFO: (19) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:162/proxy/: bar (200; 4.698871ms) Aug 19 14:27:01.216: INFO: (19) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:1080/proxy/: test<... 
(200; 4.589373ms) Aug 19 14:27:01.217: INFO: (19) /api/v1/namespaces/proxy-5677/pods/http:proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.044053ms) Aug 19 14:27:01.217: INFO: (19) /api/v1/namespaces/proxy-5677/services/proxy-service-crwd5:portname2/proxy/: bar (200; 5.202837ms) Aug 19 14:27:01.217: INFO: (19) /api/v1/namespaces/proxy-5677/pods/proxy-service-crwd5-c5tj2:160/proxy/: foo (200; 5.445712ms) Aug 19 14:27:01.218: INFO: (19) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:462/proxy/: tls qux (200; 6.16418ms) Aug 19 14:27:01.218: INFO: (19) /api/v1/namespaces/proxy-5677/pods/https:proxy-service-crwd5-c5tj2:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 19 14:27:13.747: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:27:13.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4390" for this suite. 
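The container in this test exits successfully after writing its message to the termination-log file, so the kubelet reads "OK" from the file rather than from the logs. A minimal sketch of such a pod (name, image, and command are illustrative, not the e2e fixture) is:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-file   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write "OK" to the termination-log file and exit 0; since the pod
    # succeeds and the file is non-empty, "OK" becomes the termination
    # message with no fallback to the container logs.
    command: ["/bin/sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log    # the default path
    terminationMessagePolicy: FallbackToLogsOnError

With FallbackToLogsOnError, the log tail is consulted only when the container fails and leaves the file empty, which is why the file contents win here.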
• [SLOW TEST:8.349 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":102,"skipped":1518,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:27:13.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 19 14:27:15.563: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:27:33.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5185" for this suite. 
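As the "PodSpec: initContainers in spec.initContainers" line above indicates, the test's RestartNever pod declares its init containers in the PodSpec; a minimal sketch (names and images are illustrative) is:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo   # illustrative name
spec:
  restartPolicy: Never
  # Init containers run one at a time, in order, and each must exit 0
  # before the next starts; regular containers start only after every
  # init container has completed.
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]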
• [SLOW TEST:19.570 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":103,"skipped":1531,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:27:33.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Aug 19 14:27:34.264: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:27:35.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3409" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":104,"skipped":1538,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:27:35.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:27:38.353: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:27:40.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:27:42.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444058, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:27:45.447: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:27:45.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3110" for this suite. STEP: Destroying namespace "webhook-3110-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.161 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":105,"skipped":1546,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:27:45.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:27:50.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:27:52.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444070, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444070, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444071, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444069, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:27:54.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444070, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444070, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444071, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444069, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:27:56.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444070, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444070, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444071, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444069, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:27:59.544: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:27:59.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4995-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:28:00.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-545" for this suite. STEP: Destroying namespace "webhook-545-markers" for this suite. 
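------------------------------
A note on mechanics: both admission-webhook cases above ("should mutate configmap" and "should mutate custom resource with pruning") share the step "Registering the mutating ... webhook via the AdmissionRegistration API". A minimal client-go sketch of that registration follows; the webhook name, service reference, handler path, and CA bundle are illustrative assumptions, not values recorded in this run.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/mutating-configmaps"  // assumed handler path on the webhook server
	var caBundle []byte             // assumed: PEM CA that signed the webhook serving cert ("Setting up server cert")
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail

	webhookCfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-configmaps.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Points at the in-cluster Service fronting the webhook Deployment.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	_, err = client.AdmissionregistrationV1().
		MutatingWebhookConfigurations().
		Create(context.TODO(), webhookCfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

With FailurePolicy set to Fail, any ConfigMap create that the webhook cannot serve is rejected, which is why the suite waits for the deployment and service endpoints before registering.
------------------------------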
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.953 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":106,"skipped":1549,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:28:00.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:28:01.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7940" for this suite. 
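------------------------------
The ServiceAccount lifecycle steps above (create, patch, list by label selector, delete) map one-to-one onto the CoreV1 typed client. A condensed sketch with illustrative names; the suite itself uses generated namespace and account names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sas := client.CoreV1().ServiceAccounts("default")

	// Create a labeled ServiceAccount.
	_, err = sas.Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "demo-sa",
			Labels: map[string]string{"e2e": "demo"},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Patch it: a merge patch adding a label.
	patch := []byte(`{"metadata":{"labels":{"updated":"true"}}}`)
	if _, err := sas.Patch(ctx, "demo-sa", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Find it across all namespaces by label selector (empty namespace = all).
	if _, err := client.CoreV1().ServiceAccounts("").List(ctx, metav1.ListOptions{LabelSelector: "e2e=demo"}); err != nil {
		panic(err)
	}

	// Delete it.
	if err := sas.Delete(ctx, "demo-sa", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------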
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":107,"skipped":1555,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:28:01.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-m7x9 STEP: Creating a pod to test atomic-volume-subpath Aug 19 14:28:01.863: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m7x9" in namespace "subpath-6343" to be "Succeeded or Failed" Aug 19 14:28:01.910: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 46.484017ms Aug 19 14:28:03.918: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054673792s Aug 19 14:28:05.934: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070744562s Aug 19 14:28:07.941: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 6.078271861s Aug 19 14:28:09.949: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 8.085326181s Aug 19 14:28:11.955: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 10.091551327s Aug 19 14:28:13.962: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 12.098592703s Aug 19 14:28:15.969: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 14.10606252s Aug 19 14:28:17.977: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 16.114036722s Aug 19 14:28:19.984: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 18.120626326s Aug 19 14:28:22.186: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 20.323119893s Aug 19 14:28:24.516: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 22.652883781s Aug 19 14:28:26.523: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Running", Reason="", readiness=true. Elapsed: 24.659576321s Aug 19 14:28:28.530: INFO: Pod "pod-subpath-test-configmap-m7x9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.666422711s STEP: Saw pod success Aug 19 14:28:28.530: INFO: Pod "pod-subpath-test-configmap-m7x9" satisfied condition "Succeeded or Failed" Aug 19 14:28:28.534: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-m7x9 container test-container-subpath-configmap-m7x9: STEP: delete the pod Aug 19 14:28:28.768: INFO: Waiting for pod pod-subpath-test-configmap-m7x9 to disappear Aug 19 14:28:28.798: INFO: Pod pod-subpath-test-configmap-m7x9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-m7x9 Aug 19 14:28:28.798: INFO: Deleting pod "pod-subpath-test-configmap-m7x9" in namespace "subpath-6343" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:28:28.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6343" for this suite. • [SLOW TEST:27.146 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":108,"skipped":1562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:28:28.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 19 14:28:29.910: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:28:45.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3741" for this suite. 
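------------------------------
The InitContainer case just torn down creates a RestartAlways pod whose init containers must all run to completion, in order, before the regular container starts; the "PodSpec: initContainers in spec.initContainers" line is the framework echoing that spec. A minimal pod of the same shape (image names and commands are illustrative, not taken from this run):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run sequentially to completion before Containers start.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------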
• [SLOW TEST:17.075 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":109,"skipped":1585,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:28:45.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Aug 19 14:28:50.751: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1977 pod-service-account-14e92e87-d024-4e4c-bea7-ab6e83fa5e1d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 19 14:28:52.603: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1977 pod-service-account-14e92e87-d024-4e4c-bea7-ab6e83fa5e1d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 19 14:28:54.537: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1977 pod-service-account-14e92e87-d024-4e4c-bea7-ab6e83fa5e1d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:28:56.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1977" for this suite. 
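------------------------------
The three files the test reads back with kubectl exec (token, ca.crt, namespace under /var/run/secrets/kubernetes.io/serviceaccount) are exactly what client-go's in-cluster configuration consumes, which is why a pod can reach the API server with no kubeconfig at all. A minimal in-cluster sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the mounted token, CA cert, and namespace from
	// /var/run/secrets/kubernetes.io/serviceaccount/ automatically.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods\n", len(pods.Items))
}

This program only works when run inside a pod; outside the cluster InClusterConfig returns an error because the mounted files are absent.
------------------------------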
• [SLOW TEST:10.274 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":110,"skipped":1594,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:28:56.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-52806972-b6c1-4ccf-a328-a9695a023f3a STEP: Creating secret with name s-test-opt-upd-9a58e7f2-5879-41fb-85a6-716071cb3902 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-52806972-b6c1-4ccf-a328-a9695a023f3a STEP: Updating secret s-test-opt-upd-9a58e7f2-5879-41fb-85a6-716071cb3902 STEP: Creating secret with name s-test-opt-create-d7a5980e-3a23-42d2-887f-dddfc8a2f42d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:29:08.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-365" for this suite. 
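------------------------------
The "optional updates" secrets case depends on marking the secret volumes optional: the pod starts (and keeps running) even while s-test-opt-del-... is deleted and before s-test-opt-create-... exists, and the kubelet updates the mounted files as the Secrets change. A sketch of such a volume, with illustrative names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "creds",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create", // may not exist yet
						Optional:   &optional,           // pod starts anyway; files appear once the Secret does
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/creds"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------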
• [SLOW TEST:12.484 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1612,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:29:08.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 14:29:12.858: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 14:29:15.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 14:29:17.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733444152, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 14:29:20.385: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:29:20.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4502" for this suite. STEP: Destroying namespace "webhook-4502-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.997 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":112,"skipped":1625,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:29:20.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 19 14:29:25.814: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] 
[sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:29:25.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-363" for this suite. • [SLOW TEST:5.268 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":113,"skipped":1645,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:29:25.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-247f4fd1-8693-4c34-9509-dbcbc16b8fc4 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-247f4fd1-8693-4c34-9509-dbcbc16b8fc4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:29:33.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1142" for this suite. 
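------------------------------
The "Updating configmap ..." step above propagates into the running pod without a restart: the kubelet re-syncs configMap volume contents on its periodic sync, so the test simply waits "to observe update in volume". The mutation itself is a plain Update on the ConfigMap (namespace and names here are illustrative; the suite generates both):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns, name := "default", "configmap-test-upd"

	cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // change a key; mounted files follow on the kubelet's next sync
	if _, err := client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

Note the propagation is eventually consistent; pods using the ConfigMap as environment variables, by contrast, never see updates without a restart.
------------------------------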
• [SLOW TEST:7.593 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":1650,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:29:33.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:29:33.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5386" for this suite. STEP: Destroying namespace "nspatchtest-08998f47-a192-40f9-8111-29aabd485b58-8524" for this suite. 
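------------------------------
The Namespace patch above is a single API call; a strategic-merge-patch sketch mirroring the test's create/patch/verify flow (the namespace name is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := client.CoreV1().Namespaces().Patch(
		context.TODO(), "nspatchtest-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	_ = ns.Labels["testLabel"] // "get the Namespace and ensuring it has the label"
}
------------------------------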
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":115,"skipped":1658,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:29:33.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0819 14:29:47.189310 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 19 14:30:49.248: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 19 14:30:49.248: INFO: Deleting pod "simpletest-rc-to-be-deleted-8cgf4" in namespace "gc-6275" Aug 19 14:30:49.300: INFO: Deleting pod "simpletest-rc-to-be-deleted-97xms" in namespace "gc-6275" Aug 19 14:30:49.371: INFO: Deleting pod "simpletest-rc-to-be-deleted-9dgf7" in namespace "gc-6275" Aug 19 14:30:49.705: INFO: Deleting pod "simpletest-rc-to-be-deleted-bnbkf" in namespace "gc-6275" Aug 19 14:30:49.976: INFO: Deleting pod "simpletest-rc-to-be-deleted-ccx7m" in namespace "gc-6275" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:30:50.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6275" for this suite. 
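------------------------------
The garbage-collector case hinges on pods carrying two owner references: deleting simpletest-rc-to-be-deleted must not remove pods that also list simpletest-rc-to-stay as an owner. A sketch of the two key operations, adding the second owner reference and issuing a foreground delete; the pod name and namespace are illustrative, the RC names follow the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "gc-demo" // illustrative; the run above used a generated "gc-6275"

	// Make the surviving RC a second owner of an existing pod.
	keeper, err := client.CoreV1().ReplicationControllers(ns).Get(ctx, "simpletest-rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods(ns).Get(ctx, "simpletest-pod", metav1.GetOptions{}) // illustrative pod name
	if err != nil {
		panic(err)
	}
	block := true
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "ReplicationController",
		Name:               keeper.Name,
		UID:                keeper.UID,
		BlockOwnerDeletion: &block,
	})
	if _, err := client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Foreground deletion waits for dependents; pods with a remaining valid owner survive.
	fg := metav1.DeletePropagationForeground
	if err := client.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest-rc-to-be-deleted",
		metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}
------------------------------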
• [SLOW TEST:76.759 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":116,"skipped":1659,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:30:50.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2287 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 19 14:30:51.177: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 19 14:30:51.645: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:30:54.033: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:30:55.656: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 14:30:57.741: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:30:59.708: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:01.673: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:03.651: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:05.652: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:07.653: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:09.732: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:11.653: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 14:31:13.651: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 19 14:31:13.661: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 14:31:15.667: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 19 14:31:21.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.12:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2287 PodName:host-test-container-pod ContainerName:agnhost Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:31:21.793: INFO: >>> kubeConfig: /root/.kube/config I0819 14:31:21.845585 10 log.go:181] (0x40057300b0) (0x40036b59a0) Create stream I0819 14:31:21.845774 10 log.go:181] (0x40057300b0) (0x40036b59a0) Stream added, broadcasting: 1 I0819 14:31:21.850721 10 log.go:181] (0x40057300b0) Reply frame received for 1 I0819 14:31:21.850898 10 log.go:181] (0x40057300b0) (0x4001ead2c0) Create stream I0819 14:31:21.850975 10 log.go:181] (0x40057300b0) (0x4001ead2c0) Stream added, broadcasting: 3 I0819 14:31:21.852369 10 log.go:181] (0x40057300b0) Reply frame received for 3 I0819 14:31:21.852507 10 log.go:181] (0x40057300b0) (0x40036b5a40) Create stream I0819 14:31:21.852573 10 log.go:181] (0x40057300b0) (0x40036b5a40) Stream added, broadcasting: 5 I0819 14:31:21.853854 10 log.go:181] (0x40057300b0) Reply frame received for 5 I0819 14:31:21.923893 10 log.go:181] (0x40057300b0) Data frame received for 5 I0819 14:31:21.924035 10 log.go:181] (0x40036b5a40) (5) Data frame handling I0819 14:31:21.924232 10 log.go:181] (0x40057300b0) Data frame received for 3 I0819 14:31:21.924376 10 log.go:181] (0x4001ead2c0) (3) Data frame handling I0819 14:31:21.924512 10 log.go:181] (0x4001ead2c0) (3) Data frame sent I0819 14:31:21.924644 10 log.go:181] (0x40057300b0) Data frame received for 3 I0819 14:31:21.924863 10 log.go:181] (0x4001ead2c0) (3) Data frame handling I0819 14:31:21.925474 10 log.go:181] (0x40057300b0) Data frame received for 1 I0819 14:31:21.925613 10 log.go:181] (0x40036b59a0) (1) Data frame handling I0819 14:31:21.925727 10 log.go:181] (0x40036b59a0) (1) Data frame sent I0819 14:31:21.925866 10 log.go:181] (0x40057300b0) (0x40036b59a0) Stream removed, broadcasting: 1 I0819 14:31:21.926044 10 log.go:181] (0x40057300b0) Go away received I0819 14:31:21.926228 10 log.go:181] (0x40057300b0) (0x40036b59a0) Stream removed, broadcasting: 1 I0819 14:31:21.926441 10 log.go:181] (0x40057300b0) (0x4001ead2c0) Stream removed, broadcasting: 3 I0819 14:31:21.926549 10 log.go:181] (0x40057300b0) (0x40036b5a40) Stream removed, broadcasting: 5 Aug 19 14:31:21.926: INFO: Found all expected endpoints: [netserver-0] Aug 19 14:31:21.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.10:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2287 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:31:21.932: INFO: >>> kubeConfig: /root/.kube/config I0819 14:31:21.996389 10 log.go:181] (0x40057304d0) (0x40036b5c20) Create stream I0819 14:31:21.996633 10 log.go:181] (0x40057304d0) (0x40036b5c20) Stream added, broadcasting: 1 I0819 14:31:22.001553 10 log.go:181] (0x40057304d0) Reply frame received for 1 I0819 14:31:22.001710 10 log.go:181] (0x40057304d0) (0x40039b2e60) Create stream I0819 14:31:22.001789 10 log.go:181] (0x40057304d0) (0x40039b2e60) Stream added, broadcasting: 3 I0819 14:31:22.003091 10 log.go:181] (0x40057304d0) Reply frame received for 3 I0819 14:31:22.003226 10 log.go:181] (0x40057304d0) (0x4001ead360) Create stream I0819 14:31:22.003295 10 log.go:181] (0x40057304d0) (0x4001ead360) Stream added, broadcasting: 5 I0819 14:31:22.004629 10 log.go:181] (0x40057304d0) Reply frame received for 5 I0819 14:31:22.079248 10 log.go:181] (0x40057304d0) Data frame received for 3 I0819 14:31:22.079466 10 log.go:181] (0x40039b2e60) (3) Data frame handling I0819 14:31:22.079643 10 log.go:181] (0x40057304d0) 
Data frame received for 5 I0819 14:31:22.079876 10 log.go:181] (0x4001ead360) (5) Data frame handling I0819 14:31:22.080091 10 log.go:181] (0x40039b2e60) (3) Data frame sent I0819 14:31:22.080276 10 log.go:181] (0x40057304d0) Data frame received for 3 I0819 14:31:22.080445 10 log.go:181] (0x40039b2e60) (3) Data frame handling I0819 14:31:22.080897 10 log.go:181] (0x40057304d0) Data frame received for 1 I0819 14:31:22.081041 10 log.go:181] (0x40036b5c20) (1) Data frame handling I0819 14:31:22.081183 10 log.go:181] (0x40036b5c20) (1) Data frame sent I0819 14:31:22.081322 10 log.go:181] (0x40057304d0) (0x40036b5c20) Stream removed, broadcasting: 1 I0819 14:31:22.081489 10 log.go:181] (0x40057304d0) Go away received I0819 14:31:22.081996 10 log.go:181] (0x40057304d0) (0x40036b5c20) Stream removed, broadcasting: 1 I0819 14:31:22.082159 10 log.go:181] (0x40057304d0) (0x40039b2e60) Stream removed, broadcasting: 3 I0819 14:31:22.082313 10 log.go:181] (0x40057304d0) (0x4001ead360) Stream removed, broadcasting: 5 Aug 19 14:31:22.082: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:31:22.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2287" for this suite. • [SLOW TEST:31.572 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":1667,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:31:22.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:31:26.304: INFO: Waiting up to 5m0s for pod "client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633" in namespace "pods-8025" to be "Succeeded or Failed" Aug 19 14:31:26.313: 
INFO: Pod "client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5228ms Aug 19 14:31:28.433: INFO: Pod "client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128983798s Aug 19 14:31:30.440: INFO: Pod "client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13536469s Aug 19 14:31:32.448: INFO: Pod "client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143583945s STEP: Saw pod success Aug 19 14:31:32.448: INFO: Pod "client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633" satisfied condition "Succeeded or Failed" Aug 19 14:31:32.469: INFO: Trying to get logs from node latest-worker2 pod client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633 container env3cont: STEP: delete the pod Aug 19 14:31:32.507: INFO: Waiting for pod client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633 to disappear Aug 19 14:31:32.537: INFO: Pod client-envvars-62aa8d02-4dae-4b97-a1b1-f1ad78909633 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:31:32.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8025" for this suite. • [SLOW TEST:10.454 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:31:32.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 19 14:31:39.213: INFO: Successfully updated pod "pod-update-dd4addb7-a5e0-4250-8d5c-2a17b146a01a" STEP: verifying the updated pod is in kubernetes Aug 19 14:31:39.286: INFO: Pod update OK [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:31:39.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6126" for this suite. • [SLOW TEST:6.748 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":1743,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:31:39.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 19 14:31:39.511: INFO: Waiting up to 5m0s for pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40" in namespace "emptydir-4866" to be "Succeeded or Failed" Aug 19 14:31:39.539: INFO: Pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 28.147787ms Aug 19 14:31:41.544: INFO: Pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033246736s Aug 19 14:31:43.550: INFO: Pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038942135s Aug 19 14:31:45.557: INFO: Pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045624056s Aug 19 14:31:47.564: INFO: Pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052624645s STEP: Saw pod success Aug 19 14:31:47.564: INFO: Pod "pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40" satisfied condition "Succeeded or Failed" Aug 19 14:31:47.570: INFO: Trying to get logs from node latest-worker pod pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40 container test-container: STEP: delete the pod Aug 19 14:31:47.612: INFO: Waiting for pod pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40 to disappear Aug 19 14:31:47.617: INFO: Pod pod-2500086a-faac-47e0-8d04-7b3a2c6f2a40 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:31:47.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4866" for this suite. 
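------------------------------
The EmptyDir test name encodes its three knobs: run as a non-root user, expect mode 0644 on the written file, and back the volume with tmpfs. In spec terms that is medium "Memory" plus a pod-level runAsUser; a sketch (the UID, image, and command are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nonRoot := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "writer",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------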
• [SLOW TEST:8.323 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:31:47.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-e1e33c9c-08de-4ae9-869b-e9ddf27062b7 in namespace container-probe-1412 Aug 19 14:31:51.759: INFO: Started pod busybox-e1e33c9c-08de-4ae9-869b-e9ddf27062b7 in namespace container-probe-1412 STEP: checking the pod's current state and verifying that restartCount is present Aug 19 14:31:51.764: INFO: Initial restart count of pod busybox-e1e33c9c-08de-4ae9-869b-e9ddf27062b7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:35:52.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1412" for this suite. 
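------------------------------
The four-minute runtime of the probe test just completed is the point: the container creates /tmp/health, an exec probe cats it every period, and the test watches for roughly four minutes to confirm restartCount stays 0. A sketch of the probe wiring; note the embedded field is named Handler in the v1.19-era core/v1 API this log comes from (later releases rename it ProbeHandler), and the timing values here are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Field named Handler in the v1.19 API; ProbeHandler in newer releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

As long as the probe command exits 0 the kubelet never restarts the container, which is what the elapsed-time window verifies.
------------------------------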
• [SLOW TEST:245.781 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":1773,"failed":0} [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:35:53.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:35:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4606" for this suite. 
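------------------------------
The ConfigMap lifecycle steps above end with "deleting the ConfigMap by collection with a label selector", which is a single DeleteCollection call against the namespace; a sketch with an illustrative selector:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One call removes every ConfigMap in the namespace matching the selector.
	err = client.CoreV1().ConfigMaps("default").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-configmap-static=true"},
	)
	if err != nil {
		panic(err)
	}
}
------------------------------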
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":122,"skipped":1773,"failed":0} SSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:35:55.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Aug 19 14:35:55.803: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:35:56.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5458" for this suite. 
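------------------------------
The Events API case performs the same collection-delete pattern, but against the events.k8s.io group rather than core; through the typed client that is EventsV1 (assuming a client-go new enough to expose it, as the v0.19 libraries behind this suite are; the label selector is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	events := client.EventsV1().Events("default")

	// List the labeled events, then delete them as a collection.
	evs, err := events.List(ctx, metav1.ListOptions{LabelSelector: "testevent-set=true"})
	if err != nil {
		panic(err)
	}
	_ = len(evs.Items) // the test then re-lists to "check that the list of events matches the requested quantity"
	err = events.DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "testevent-set=true"})
	if err != nil {
		panic(err)
	}
}
------------------------------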
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":123,"skipped":1781,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:35:56.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:35:56.669: INFO: Creating deployment "webserver-deployment" Aug 19 14:35:56.683: INFO: Waiting for observed generation 1 Aug 19 14:35:59.155: INFO: Waiting for all required pods to come up Aug 19 14:35:59.416: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 19 14:36:15.553: INFO: Waiting for deployment "webserver-deployment" to complete Aug 19 14:36:15.565: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 19 14:36:15.579: INFO: Updating deployment webserver-deployment Aug 19 14:36:15.579: INFO: Waiting for observed generation 2 Aug 19 14:36:18.076: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 19 14:36:18.254: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 19 14:36:18.307: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 19 14:36:18.473: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 19 14:36:18.473: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 19 14:36:18.477: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 19 14:36:18.485: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 19 14:36:18.485: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 19 14:36:18.496: INFO: Updating deployment webserver-deployment Aug 19 14:36:18.496: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 19 14:36:19.181: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 19 14:36:19.482: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 19 14:36:22.304: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1209 /apis/apps/v1/namespaces/deployment-1209/deployments/webserver-deployment 
9b91467c-d214-4027-9bb3-91c4af568720 1513582 3 2020-08-19 14:35:56 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-19 14:36:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004bec858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-19 14:36:19 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-08-19 14:36:19 +0000 UTC,LastTransitionTime:2020-08-19 14:35:56 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 19 14:36:22.571: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-1209 /apis/apps/v1/namespaces/deployment-1209/replicasets/webserver-deployment-795d758f88 
9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 1513573 3 2020-08-19 14:36:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 9b91467c-d214-4027-9bb3-91c4af568720 0x4004beccb7 0x4004beccb8}] [] [{kube-controller-manager Update apps/v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b91467c-d214-4027-9bb3-91c4af568720\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004becd48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 19 14:36:22.571: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 19 14:36:22.572: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-1209 /apis/apps/v1/namespaces/deployment-1209/replicasets/webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 1513559 3 2020-08-19 14:35:56 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 9b91467c-d214-4027-9bb3-91c4af568720 0x4004becda7 0x4004becda8}] [] [{kube-controller-manager Update apps/v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b91467c-d214-4027-9bb3-91c4af568720\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004bece18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 19 14:36:22.854: INFO: Pod "webserver-deployment-795d758f88-64qq2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-64qq2 webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-64qq2 3eaf78b4-4d4c-4d28-90a0-8959dc11648c 1513597 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebc037 0x4001ebc038}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.856: INFO: Pod "webserver-deployment-795d758f88-cj7zc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cj7zc webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-cj7zc 9f37425d-6efa-4307-be48-fb5ade10eeee 1513599 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebc1e7 0x4001ebc1e8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.857: INFO: Pod "webserver-deployment-795d758f88-mcz52" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mcz52 webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-mcz52 2adc8a4e-6ede-4494-b9bb-608a105ff79e 1513482 0 2020-08-19 14:36:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebc397 0x4001ebc398}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.859: INFO: Pod "webserver-deployment-795d758f88-mjpsq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mjpsq webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-mjpsq b677cd32-5da6-4900-8fba-9512a99a1026 1513580 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebc547 0x4001ebc548}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.860: INFO: Pod "webserver-deployment-795d758f88-n5pcf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n5pcf webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-n5pcf 84187629-6ede-4dcf-9ee1-240d83e6861e 1513607 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebc6f7 0x4001ebc6f8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.861: INFO: Pod "webserver-deployment-795d758f88-n6gr7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n6gr7 webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-n6gr7 15ff5d95-ab7b-4775-b3a7-998ab1e48161 1513623 0 2020-08-19 14:36:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebc8b7 0x4001ebc8b8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.21,StartTime:2020-08-19 14:36:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.863: INFO: Pod "webserver-deployment-795d758f88-ng6cs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ng6cs webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-ng6cs 5d261ded-b5a8-4411-9235-497ad392e245 1513621 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebced7 0x4001ebced8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.864: INFO: Pod "webserver-deployment-795d758f88-q56sb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-q56sb webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-q56sb 69d7cd9c-3a58-4322-a346-123dd16dcea3 1513605 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebd087 0x4001ebd088}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.865: INFO: Pod "webserver-deployment-795d758f88-rfv7s" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rfv7s webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-rfv7s c4739123-a996-4ac6-9672-ba2748412be6 1513560 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebd237 0x4001ebd238}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.866: INFO: Pod "webserver-deployment-795d758f88-t6hvd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t6hvd webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-t6hvd 6791b14e-080f-48ea-8d3d-4a14602c4dbc 1513614 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebd3e7 0x4001ebd3e8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.867: INFO: Pod "webserver-deployment-795d758f88-x6npx" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-x6npx webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-x6npx 10c2a3d6-e97f-4950-b190-0ad182b08827 1513472 0 2020-08-19 14:36:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebd597 0x4001ebd598}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.868: INFO: Pod "webserver-deployment-795d758f88-zhsmr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zhsmr webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-zhsmr 82e160a2-f135-4619-a5e7-22b5d1b9eb31 1513457 0 2020-08-19 14:36:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebd747 0x4001ebd748}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.869: INFO: Pod "webserver-deployment-795d758f88-zjkzn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zjkzn webserver-deployment-795d758f88- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-795d758f88-zjkzn 8d51c509-bf69-441b-9b47-6c646f487a6e 1513480 0 2020-08-19 14:36:15 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 9abda1c0-80a0-4af6-9d5a-46c4bb058d4a 0x4001ebd917 0x4001ebd918}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9abda1c0-80a0-4af6-9d5a-46c4bb058d4a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.871: INFO: Pod "webserver-deployment-dd94f59b7-4ftmm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4ftmm webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-4ftmm 8bc8233a-469a-4107-99b5-d7313db5a58e 1513375 0 2020-08-19 14:35:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4001ebdf77 0x4001ebdf78}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.15,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4a4b4e4adc0beecb4b9a7fdca5d9bcf38675dacef945fec582268442e8cb68cb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.872: INFO: Pod "webserver-deployment-dd94f59b7-68rqs" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-68rqs webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-68rqs f548a240-7c9b-4846-b307-e0fd1646d3cf 1513595 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a761c7 0x4003a761c8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.873: INFO: Pod "webserver-deployment-dd94f59b7-72cg4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-72cg4 webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-72cg4 e02e5da9-2e20-4160-8940-c2b065fc8ddd 1513611 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a76477 0x4003a76478}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.874: INFO: Pod "webserver-deployment-dd94f59b7-7c778" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7c778 webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-7c778 72d46d27-bb8b-4018-bcd3-496d59c72eda 1513603 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a766e7 0x4003a766e8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.876: INFO: Pod "webserver-deployment-dd94f59b7-7rjhv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7rjhv webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-7rjhv fa15952a-6926-459b-9af8-53f8641aa805 1513584 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a769f7 0x4003a769f8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.877: INFO: Pod "webserver-deployment-dd94f59b7-86w6c" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-86w6c webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-86w6c 1cd1a1a8-e58b-4c8b-a984-9e4dd5846534 1513578 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a76b87 0x4003a76b88}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.878: INFO: Pod "webserver-deployment-dd94f59b7-8gwk5" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8gwk5 webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-8gwk5 b50bcb0d-0a6c-4cec-8e70-6cfdb7138e87 1513412 0 2020-08-19 14:35:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a76d37 0x4003a76d38}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.18,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://397db8240fe289e49f948a771fb583c887a6e01e33548b362d4c52a80bca26c3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.879: INFO: Pod "webserver-deployment-dd94f59b7-9fh4x" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9fh4x webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-9fh4x 0cfd98d7-0788-46b8-b72c-dde8c9f84767 1513591 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a76ef7 0x4003a76ef8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.880: INFO: Pod "webserver-deployment-dd94f59b7-9fr5r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9fr5r webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-9fr5r 90fce801-65ae-4ae2-a91e-bdc0bf1fdab9 1513530 0 2020-08-19 14:36:18 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a772e7 0x4003a772e8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.881: INFO: Pod "webserver-deployment-dd94f59b7-bhxnt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bhxnt webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-bhxnt 4dc251bb-557d-4259-90e5-b78a69c0c43a 1513372 0 2020-08-19 14:35:56 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a77927 0x4003a77928}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.16,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d5bd548e5b315db395987429aab1a622687477e339bbef43347faf1c01c95e9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.882: INFO: Pod "webserver-deployment-dd94f59b7-dph7n" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dph7n webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-dph7n c7d2d253-8afe-4ec9-a6d6-25c6ac25a9ae 1513419 0 2020-08-19 14:35:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x4003a77e57 0x4003a77e58}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.20,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf4be0ad90935e763cb6f4155623c5fb248368b2847d01cfff14007ae3e1c767,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.883: INFO: Pod "webserver-deployment-dd94f59b7-dtgpk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dtgpk webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-dtgpk 8063b64e-7757-479b-844a-8c27b778d0de 1513385 0 2020-08-19 14:35:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c007 0x400332c008}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.16,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4c8f05546f083821e986dadc106fb6edb7d23b6ffdd6db2468e7ce4117878a80,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.884: INFO: Pod "webserver-deployment-dd94f59b7-dzhpg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dzhpg webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-dzhpg 3aa13ca3-4968-49ba-9223-f6fbe4a16ec5 1513586 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c1b7 0x400332c1b8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.885: INFO: Pod "webserver-deployment-dd94f59b7-fdx97" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fdx97 webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-fdx97 cb30106a-c60d-4e5e-9a1f-9d30e455504e 1513571 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c347 0x400332c348}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.887: INFO: Pod "webserver-deployment-dd94f59b7-gjvxc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gjvxc webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-gjvxc 2a5bc6f6-ccce-4f16-930b-a9b159359b39 1513618 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c4d7 0x400332c4d8}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.888: INFO: Pod "webserver-deployment-dd94f59b7-h5z2f" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-h5z2f webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-h5z2f 17354d10-8d1a-42d9-adfc-efc7f2b206cf 1513414 0 2020-08-19 14:35:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c667 0x400332c668}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.19,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a89aa28774dc5496d517cd7e78c42b531dc6b06f0e2dc077b13afcc9bfc7276e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.889: INFO: Pod "webserver-deployment-dd94f59b7-ktcxj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ktcxj webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-ktcxj 49082a9e-09ad-48fe-bd11-2ef217eff2b6 1513617 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c817 0x400332c818}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.891: INFO: Pod "webserver-deployment-dd94f59b7-rjztd" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rjztd webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-rjztd 0fda7827-5ec2-48e9-8a0f-66b607650774 1513364 0 2020-08-19 14:35:57 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332c9a7 0x400332c9a8}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:57 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.17,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://392a3d8312ef49034a3deb9bdd69b1489b8021ba499ecbfccbb4eaac3f44da6d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.892: INFO: Pod "webserver-deployment-dd94f59b7-xggvb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xggvb webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-xggvb 0ed81465-dec2-4760-95bd-81efd353a435 1513363 0 2020-08-19 14:35:56 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332cb57 0x400332cb58}] [] [{kube-controller-manager Update v1 2020-08-19 14:35:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:35:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.14,StartTime:2020-08-19 14:35:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 14:36:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ac7e59ac64296ae485c3ccf45d570b9055c8ed54855d79ca878ba331fbf2e49,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 14:36:22.893: INFO: Pod "webserver-deployment-dd94f59b7-z745k" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z745k webserver-deployment-dd94f59b7- deployment-1209 /api/v1/namespaces/deployment-1209/pods/webserver-deployment-dd94f59b7-z745k f35bceb5-6d04-4a61-859e-a0a2a4a3375d 1513593 0 2020-08-19 14:36:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 07667d9e-62e2-462c-9b91-c84b3f647bca 0x400332cd07 0x400332cd08}] [] [{kube-controller-manager Update v1 2020-08-19 14:36:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07667d9e-62e2-462c-9b91-c84b3f647bca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 14:36:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gdd5w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gdd5w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gdd5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 14:36:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-19 14:36:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:36:22.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1209" for this suite. • [SLOW TEST:26.944 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":124,"skipped":1792,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:36:23.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8247.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8247.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8247.svc.cluster.local 
SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8247.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8247.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8247.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 117.180.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.180.117_udp@PTR;check="$$(dig +tcp +noall +answer +search 117.180.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.180.117_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8247.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8247.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8247.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8247.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8247.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8247.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 117.180.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.180.117_udp@PTR;check="$$(dig +tcp +noall +answer +search 117.180.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.180.117_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 14:36:55.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:55.810: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:56.623: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:57.567: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:58.735: INFO: Unable to read jessie_udp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:59.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:59.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:59.568: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:36:59.931: INFO: Lookups using dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff failed for: [wheezy_udp@dns-test-service.dns-8247.svc.cluster.local wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local jessie_udp@dns-test-service.dns-8247.svc.cluster.local jessie_tcp@dns-test-service.dns-8247.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local] Aug 19 14:37:05.251: INFO: Unable to read wheezy_udp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:05.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods 
dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:05.466: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:05.477: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:06.546: INFO: Unable to read jessie_udp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:06.550: INFO: Unable to read jessie_tcp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:06.554: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:06.722: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:07.871: INFO: Lookups using dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff failed for: [wheezy_udp@dns-test-service.dns-8247.svc.cluster.local wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local jessie_udp@dns-test-service.dns-8247.svc.cluster.local jessie_tcp@dns-test-service.dns-8247.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local] Aug 19 14:37:10.362: INFO: Unable to read wheezy_udp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:10.915: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:12.620: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:13.794: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:15.425: INFO: Unable to read jessie_udp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the 
server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:15.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:15.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:15.745: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local from pod dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff: the server could not find the requested resource (get pods dns-test-3a849e9f-66cc-4937-828a-b409441216ff) Aug 19 14:37:15.765: INFO: Lookups using dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff failed for: [wheezy_udp@dns-test-service.dns-8247.svc.cluster.local wheezy_tcp@dns-test-service.dns-8247.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local jessie_udp@dns-test-service.dns-8247.svc.cluster.local jessie_tcp@dns-test-service.dns-8247.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8247.svc.cluster.local] Aug 19 14:37:21.469: INFO: DNS probes using dns-8247/dns-test-3a849e9f-66cc-4937-828a-b409441216ff succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:37:24.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8247" for this suite. 
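The probe pods above run dig in a loop and write an OK file per record; the test then reads those files back until every expected name resolves, which is why the early "Unable to read" lines are retried rather than fatal. A minimal Go sketch of the same two lookup kinds (A for the service name, SRV for the named port), assuming it runs inside the cluster where the dns-8247 names resolve; this is illustrative, not the suite's source:

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the headless service, as probed by
	// wheezy_udp@dns-test-service.dns-8247.svc.cluster.local above.
	addrs, err := net.LookupHost("dns-test-service.dns-8247.svc.cluster.local")
	if err != nil {
		fmt.Println("A lookup failed:", err)
	} else {
		fmt.Println("A records:", addrs)
	}

	// SRV record for the named http port, as probed by
	// _http._tcp.dns-test-service.dns-8247.svc.cluster.local above.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-8247.svc.cluster.local")
	if err != nil {
		fmt.Println("SRV lookup failed:", err)
		return
	}
	for _, s := range srvs {
		fmt.Printf("SRV target %s:%d\n", s.Target, s.Port)
	}
}

The test additionally checks the same names over TCP (dig +tcp) and verifies pod A records and PTR records, which the net resolver sketch above does not cover.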
• [SLOW TEST:61.986 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":125,"skipped":1795,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:37:25.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 19 14:37:26.339: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 19 14:37:31.563: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:37:32.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2441" for this suite. 
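The release in the ReplicationController test works because the controller only owns pods whose labels match its selector: once the test rewrites the matched label, the controller clears the pod's ownerReference (releasing it) and creates a replacement to restore the replica count. A hedged client-go sketch of that label flip, assuming the clientcmd/kubernetes packages from this suite's era; the pod name suffix and new label value are illustrative, not taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Overwrite the label the RC selector matches on; the controller then
	// releases (orphans) this pod and spawns a new one to keep replicas at 1.
	patch := []byte(`{"metadata":{"labels":{"name":"pod-release-orphaned"}}}`)
	pod, err := cs.CoreV1().Pods("replication-controller-2441").Patch(
		context.TODO(), "pod-release-xxxxx", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ownerReferences after release:", pod.OwnerReferences)
}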
• [SLOW TEST:7.802 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":126,"skipped":1798,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:37:32.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Aug 19 14:37:34.648: INFO: created test-event-1 Aug 19 14:37:34.849: INFO: created test-event-2 Aug 19 14:37:35.195: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Aug 19 14:37:35.234: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Aug 19 14:37:36.202: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:37:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7282" for this suite. 
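In the Events test, DeleteCollection removes every Event matching the list options in a single request, which is why test-event-1 through test-event-3 disappear together and the follow-up List is only a quantity check. A hedged client-go sketch of that call; the label selector value is an assumption, since the log does not show the one the test sets:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One request deletes all labelled events in the namespace; listing with
	// the same selector afterwards should return an empty set.
	err = cs.CoreV1().Events("events-7282").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"},
	)
	if err != nil {
		panic(err)
	}
}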
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":127,"skipped":1862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:37:36.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8904 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8904 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8904 Aug 19 14:37:40.228: INFO: Found 0 stateful pods, waiting for 1 Aug 19 14:37:50.256: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 19 14:38:00.817: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 19 14:38:00.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 14:38:23.762: INFO: stderr: "I0819 14:38:23.542207 1251 log.go:181] (0x400003a0b0) (0x4000f8c0a0) Create stream\nI0819 14:38:23.546245 1251 log.go:181] (0x400003a0b0) (0x4000f8c0a0) Stream added, broadcasting: 1\nI0819 14:38:23.558138 1251 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0819 14:38:23.558994 1251 log.go:181] (0x400003a0b0) (0x4000e080a0) Create stream\nI0819 14:38:23.559111 1251 log.go:181] (0x400003a0b0) (0x4000e080a0) Stream added, broadcasting: 3\nI0819 14:38:23.561023 1251 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0819 14:38:23.561377 1251 log.go:181] (0x400003a0b0) (0x40000ba500) Create stream\nI0819 14:38:23.561464 1251 log.go:181] (0x400003a0b0) (0x40000ba500) Stream added, broadcasting: 5\nI0819 14:38:23.562782 1251 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0819 14:38:23.629903 1251 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:38:23.630108 1251 log.go:181] (0x40000ba500) (5) Data frame handling\nI0819 14:38:23.630506 1251 log.go:181] 
(0x40000ba500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 14:38:23.741288 1251 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:38:23.741478 1251 log.go:181] (0x4000e080a0) (3) Data frame handling\nI0819 14:38:23.741582 1251 log.go:181] (0x4000e080a0) (3) Data frame sent\nI0819 14:38:23.741660 1251 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:38:23.741760 1251 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:38:23.741861 1251 log.go:181] (0x40000ba500) (5) Data frame handling\nI0819 14:38:23.741960 1251 log.go:181] (0x4000e080a0) (3) Data frame handling\nI0819 14:38:23.743087 1251 log.go:181] (0x400003a0b0) Data frame received for 1\nI0819 14:38:23.743187 1251 log.go:181] (0x4000f8c0a0) (1) Data frame handling\nI0819 14:38:23.743288 1251 log.go:181] (0x4000f8c0a0) (1) Data frame sent\nI0819 14:38:23.745123 1251 log.go:181] (0x400003a0b0) (0x4000f8c0a0) Stream removed, broadcasting: 1\nI0819 14:38:23.747254 1251 log.go:181] (0x400003a0b0) Go away received\nI0819 14:38:23.750032 1251 log.go:181] (0x400003a0b0) (0x4000f8c0a0) Stream removed, broadcasting: 1\nI0819 14:38:23.750314 1251 log.go:181] (0x400003a0b0) (0x4000e080a0) Stream removed, broadcasting: 3\nI0819 14:38:23.750496 1251 log.go:181] (0x400003a0b0) (0x40000ba500) Stream removed, broadcasting: 5\n" Aug 19 14:38:23.763: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 14:38:23.763: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 14:38:23.770: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 19 14:38:33.845: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 19 14:38:33.845: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 14:38:34.029: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999862239s Aug 19 14:38:35.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.908528049s Aug 19 14:38:36.044: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.900821832s Aug 19 14:38:37.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.893384852s Aug 19 14:38:38.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.884823362s Aug 19 14:38:39.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.86045898s Aug 19 14:38:40.092: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.854086454s Aug 19 14:38:41.099: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.845660913s Aug 19 14:38:42.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.839090299s Aug 19 14:38:43.140: INFO: Verifying statefulset ss doesn't scale past 1 for another 806.725821ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8904 Aug 19 14:38:44.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:38:45.845: INFO: stderr: "I0819 14:38:45.746555 1272 log.go:181] (0x40007b2160) (0x4000980280) Create stream\nI0819 14:38:45.749977 1272 log.go:181] (0x40007b2160) (0x4000980280) Stream added, broadcasting: 1\nI0819 14:38:45.760599 1272 log.go:181] (0x40007b2160) Reply frame 
received for 1\nI0819 14:38:45.761231 1272 log.go:181] (0x40007b2160) (0x4000578000) Create stream\nI0819 14:38:45.761297 1272 log.go:181] (0x40007b2160) (0x4000578000) Stream added, broadcasting: 3\nI0819 14:38:45.762580 1272 log.go:181] (0x40007b2160) Reply frame received for 3\nI0819 14:38:45.762897 1272 log.go:181] (0x40007b2160) (0x40005780a0) Create stream\nI0819 14:38:45.762992 1272 log.go:181] (0x40007b2160) (0x40005780a0) Stream added, broadcasting: 5\nI0819 14:38:45.764364 1272 log.go:181] (0x40007b2160) Reply frame received for 5\nI0819 14:38:45.820898 1272 log.go:181] (0x40007b2160) Data frame received for 5\nI0819 14:38:45.821684 1272 log.go:181] (0x40007b2160) Data frame received for 3\nI0819 14:38:45.821932 1272 log.go:181] (0x4000578000) (3) Data frame handling\nI0819 14:38:45.822221 1272 log.go:181] (0x40007b2160) Data frame received for 1\nI0819 14:38:45.822392 1272 log.go:181] (0x4000980280) (1) Data frame handling\nI0819 14:38:45.822515 1272 log.go:181] (0x40005780a0) (5) Data frame handling\nI0819 14:38:45.823462 1272 log.go:181] (0x4000980280) (1) Data frame sent\nI0819 14:38:45.823571 1272 log.go:181] (0x4000578000) (3) Data frame sent\nI0819 14:38:45.823774 1272 log.go:181] (0x40005780a0) (5) Data frame sent\nI0819 14:38:45.824421 1272 log.go:181] (0x40007b2160) Data frame received for 3\nI0819 14:38:45.824593 1272 log.go:181] (0x4000578000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0819 14:38:45.827177 1272 log.go:181] (0x40007b2160) Data frame received for 5\nI0819 14:38:45.828269 1272 log.go:181] (0x40007b2160) (0x4000980280) Stream removed, broadcasting: 1\nI0819 14:38:45.828937 1272 log.go:181] (0x40005780a0) (5) Data frame handling\nI0819 14:38:45.830106 1272 log.go:181] (0x40007b2160) Go away received\nI0819 14:38:45.832307 1272 log.go:181] (0x40007b2160) (0x4000980280) Stream removed, broadcasting: 1\nI0819 14:38:45.832948 1272 log.go:181] (0x40007b2160) (0x4000578000) Stream removed, broadcasting: 3\nI0819 14:38:45.833163 1272 log.go:181] (0x40007b2160) (0x40005780a0) Stream removed, broadcasting: 5\n" Aug 19 14:38:45.846: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 14:38:45.846: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 14:38:45.858: INFO: Found 1 stateful pods, waiting for 3 Aug 19 14:38:55.867: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:38:55.867: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:38:55.868: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 19 14:38:55.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 14:38:57.884: INFO: stderr: "I0819 14:38:57.753556 1292 log.go:181] (0x4000b14000) (0x4000b61b80) Create stream\nI0819 14:38:57.759611 1292 log.go:181] (0x4000b14000) (0x4000b61b80) Stream added, broadcasting: 1\nI0819 14:38:57.769631 1292 log.go:181] (0x4000b14000) Reply frame received for 1\nI0819 14:38:57.770621 1292 log.go:181] (0x4000b14000) (0x4000855900) Create stream\nI0819 14:38:57.770723 1292 log.go:181] 
(0x4000b14000) (0x4000855900) Stream added, broadcasting: 3\nI0819 14:38:57.772262 1292 log.go:181] (0x4000b14000) Reply frame received for 3\nI0819 14:38:57.772590 1292 log.go:181] (0x4000b14000) (0x400031a000) Create stream\nI0819 14:38:57.772670 1292 log.go:181] (0x4000b14000) (0x400031a000) Stream added, broadcasting: 5\nI0819 14:38:57.774061 1292 log.go:181] (0x4000b14000) Reply frame received for 5\nI0819 14:38:57.858318 1292 log.go:181] (0x4000b14000) Data frame received for 5\nI0819 14:38:57.858492 1292 log.go:181] (0x4000b14000) Data frame received for 1\nI0819 14:38:57.858752 1292 log.go:181] (0x4000b14000) Data frame received for 3\nI0819 14:38:57.858851 1292 log.go:181] (0x400031a000) (5) Data frame handling\nI0819 14:38:57.859012 1292 log.go:181] (0x4000855900) (3) Data frame handling\nI0819 14:38:57.859172 1292 log.go:181] (0x4000b61b80) (1) Data frame handling\nI0819 14:38:57.859688 1292 log.go:181] (0x400031a000) (5) Data frame sent\nI0819 14:38:57.860114 1292 log.go:181] (0x4000855900) (3) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 14:38:57.860614 1292 log.go:181] (0x4000b61b80) (1) Data frame sent\nI0819 14:38:57.861519 1292 log.go:181] (0x4000b14000) Data frame received for 5\nI0819 14:38:57.861652 1292 log.go:181] (0x4000b14000) Data frame received for 3\nI0819 14:38:57.861712 1292 log.go:181] (0x4000b14000) (0x4000b61b80) Stream removed, broadcasting: 1\nI0819 14:38:57.862599 1292 log.go:181] (0x400031a000) (5) Data frame handling\nI0819 14:38:57.863360 1292 log.go:181] (0x4000855900) (3) Data frame handling\nI0819 14:38:57.864469 1292 log.go:181] (0x4000b14000) Go away received\nI0819 14:38:57.868355 1292 log.go:181] (0x4000b14000) (0x4000b61b80) Stream removed, broadcasting: 1\nI0819 14:38:57.869080 1292 log.go:181] (0x4000b14000) (0x4000855900) Stream removed, broadcasting: 3\nI0819 14:38:57.869466 1292 log.go:181] (0x4000b14000) (0x400031a000) Stream removed, broadcasting: 5\n" Aug 19 14:38:57.885: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 14:38:57.885: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 14:38:57.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 14:39:00.363: INFO: stderr: "I0819 14:38:59.782674 1312 log.go:181] (0x400099ebb0) (0x40005e0460) Create stream\nI0819 14:38:59.788158 1312 log.go:181] (0x400099ebb0) (0x40005e0460) Stream added, broadcasting: 1\nI0819 14:38:59.802763 1312 log.go:181] (0x400099ebb0) Reply frame received for 1\nI0819 14:38:59.803976 1312 log.go:181] (0x400099ebb0) (0x4000ce6f00) Create stream\nI0819 14:38:59.804101 1312 log.go:181] (0x400099ebb0) (0x4000ce6f00) Stream added, broadcasting: 3\nI0819 14:38:59.806529 1312 log.go:181] (0x400099ebb0) Reply frame received for 3\nI0819 14:38:59.807080 1312 log.go:181] (0x400099ebb0) (0x40005e0500) Create stream\nI0819 14:38:59.807200 1312 log.go:181] (0x400099ebb0) (0x40005e0500) Stream added, broadcasting: 5\nI0819 14:38:59.809333 1312 log.go:181] (0x400099ebb0) Reply frame received for 5\nI0819 14:38:59.858137 1312 log.go:181] (0x400099ebb0) Data frame received for 5\nI0819 14:38:59.858477 1312 log.go:181] (0x40005e0500) (5) Data frame handling\nI0819 14:38:59.859340 1312 log.go:181] (0x40005e0500) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0819 14:39:00.341131 1312 log.go:181] (0x400099ebb0) Data frame received for 3\nI0819 14:39:00.341428 1312 log.go:181] (0x400099ebb0) Data frame received for 5\nI0819 14:39:00.341627 1312 log.go:181] (0x40005e0500) (5) Data frame handling\nI0819 14:39:00.341787 1312 log.go:181] (0x4000ce6f00) (3) Data frame handling\nI0819 14:39:00.341949 1312 log.go:181] (0x4000ce6f00) (3) Data frame sent\nI0819 14:39:00.342096 1312 log.go:181] (0x400099ebb0) Data frame received for 3\nI0819 14:39:00.342216 1312 log.go:181] (0x4000ce6f00) (3) Data frame handling\nI0819 14:39:00.343927 1312 log.go:181] (0x400099ebb0) Data frame received for 1\nI0819 14:39:00.344012 1312 log.go:181] (0x40005e0460) (1) Data frame handling\nI0819 14:39:00.344092 1312 log.go:181] (0x40005e0460) (1) Data frame sent\nI0819 14:39:00.346027 1312 log.go:181] (0x400099ebb0) (0x40005e0460) Stream removed, broadcasting: 1\nI0819 14:39:00.349241 1312 log.go:181] (0x400099ebb0) Go away received\nI0819 14:39:00.352351 1312 log.go:181] (0x400099ebb0) (0x40005e0460) Stream removed, broadcasting: 1\nI0819 14:39:00.352829 1312 log.go:181] (0x400099ebb0) (0x4000ce6f00) Stream removed, broadcasting: 3\nI0819 14:39:00.353103 1312 log.go:181] (0x400099ebb0) (0x40005e0500) Stream removed, broadcasting: 5\n" Aug 19 14:39:00.364: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 14:39:00.364: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 14:39:00.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 14:39:02.231: INFO: stderr: "I0819 14:39:02.055293 1333 log.go:181] (0x400003a420) (0x4000c3ff40) Create stream\nI0819 14:39:02.057690 1333 log.go:181] (0x400003a420) (0x4000c3ff40) Stream added, broadcasting: 1\nI0819 14:39:02.069038 1333 log.go:181] (0x400003a420) Reply frame received for 1\nI0819 14:39:02.081527 1333 log.go:181] (0x400003a420) (0x4000a6c140) Create stream\nI0819 14:39:02.081665 1333 log.go:181] (0x400003a420) (0x4000a6c140) Stream added, broadcasting: 3\nI0819 14:39:02.083419 1333 log.go:181] (0x400003a420) Reply frame received for 3\nI0819 14:39:02.083734 1333 log.go:181] (0x400003a420) (0x4000a6caa0) Create stream\nI0819 14:39:02.083803 1333 log.go:181] (0x400003a420) (0x4000a6caa0) Stream added, broadcasting: 5\nI0819 14:39:02.085009 1333 log.go:181] (0x400003a420) Reply frame received for 5\nI0819 14:39:02.172174 1333 log.go:181] (0x400003a420) Data frame received for 5\nI0819 14:39:02.172473 1333 log.go:181] (0x4000a6caa0) (5) Data frame handling\nI0819 14:39:02.173072 1333 log.go:181] (0x4000a6caa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 14:39:02.213895 1333 log.go:181] (0x400003a420) Data frame received for 3\nI0819 14:39:02.214051 1333 log.go:181] (0x4000a6c140) (3) Data frame handling\nI0819 14:39:02.214140 1333 log.go:181] (0x400003a420) Data frame received for 5\nI0819 14:39:02.214242 1333 log.go:181] (0x4000a6caa0) (5) Data frame handling\nI0819 14:39:02.214332 1333 log.go:181] (0x4000a6c140) (3) Data frame sent\nI0819 14:39:02.214420 1333 log.go:181] (0x400003a420) Data frame received for 3\nI0819 14:39:02.214497 1333 log.go:181] (0x4000a6c140) (3) Data frame handling\nI0819 14:39:02.215108 1333 log.go:181] (0x400003a420) Data 
frame received for 1\nI0819 14:39:02.215175 1333 log.go:181] (0x4000c3ff40) (1) Data frame handling\nI0819 14:39:02.215243 1333 log.go:181] (0x4000c3ff40) (1) Data frame sent\nI0819 14:39:02.216192 1333 log.go:181] (0x400003a420) (0x4000c3ff40) Stream removed, broadcasting: 1\nI0819 14:39:02.219173 1333 log.go:181] (0x400003a420) Go away received\nI0819 14:39:02.221457 1333 log.go:181] (0x400003a420) (0x4000c3ff40) Stream removed, broadcasting: 1\nI0819 14:39:02.221785 1333 log.go:181] (0x400003a420) (0x4000a6c140) Stream removed, broadcasting: 3\nI0819 14:39:02.222445 1333 log.go:181] (0x400003a420) (0x4000a6caa0) Stream removed, broadcasting: 5\n" Aug 19 14:39:02.231: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 14:39:02.231: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 14:39:02.231: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 14:39:02.237: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 19 14:39:12.355: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 19 14:39:12.355: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 19 14:39:12.355: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 19 14:39:12.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999992509s Aug 19 14:39:13.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.956004205s Aug 19 14:39:14.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.945765299s Aug 19 14:39:15.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.936955216s Aug 19 14:39:16.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927658586s Aug 19 14:39:17.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.917943196s Aug 19 14:39:18.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.736441719s Aug 19 14:39:19.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.726834409s Aug 19 14:39:20.697: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.677207448s Aug 19 14:39:21.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 668.751129ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8904 Aug 19 14:39:22.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:39:24.606: INFO: stderr: "I0819 14:39:24.484549 1353 log.go:181] (0x4000d16000) (0x4000f30000) Create stream\nI0819 14:39:24.487556 1353 log.go:181] (0x4000d16000) (0x4000f30000) Stream added, broadcasting: 1\nI0819 14:39:24.504396 1353 log.go:181] (0x4000d16000) Reply frame received for 1\nI0819 14:39:24.505214 1353 log.go:181] (0x4000d16000) (0x400013f220) Create stream\nI0819 14:39:24.505288 1353 log.go:181] (0x4000d16000) (0x400013f220) Stream added, broadcasting: 3\nI0819 14:39:24.506528 1353 log.go:181] (0x4000d16000) Reply frame received for 3\nI0819 14:39:24.506806 1353 log.go:181] (0x4000d16000) (0x4000388960) Create stream\nI0819 14:39:24.506872 1353 log.go:181] (0x4000d16000) (0x4000388960) Stream added, broadcasting: 5\nI0819 14:39:24.508000 1353
log.go:181] (0x4000d16000) Reply frame received for 5\nI0819 14:39:24.579912 1353 log.go:181] (0x4000d16000) Data frame received for 5\nI0819 14:39:24.580278 1353 log.go:181] (0x4000d16000) Data frame received for 3\nI0819 14:39:24.580388 1353 log.go:181] (0x4000388960) (5) Data frame handling\nI0819 14:39:24.581057 1353 log.go:181] (0x400013f220) (3) Data frame handling\nI0819 14:39:24.581454 1353 log.go:181] (0x4000d16000) Data frame received for 1\nI0819 14:39:24.581714 1353 log.go:181] (0x4000f30000) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0819 14:39:24.582954 1353 log.go:181] (0x400013f220) (3) Data frame sent\nI0819 14:39:24.583153 1353 log.go:181] (0x4000f30000) (1) Data frame sent\nI0819 14:39:24.583432 1353 log.go:181] (0x4000388960) (5) Data frame sent\nI0819 14:39:24.583588 1353 log.go:181] (0x4000d16000) Data frame received for 5\nI0819 14:39:24.583729 1353 log.go:181] (0x4000388960) (5) Data frame handling\nI0819 14:39:24.583822 1353 log.go:181] (0x4000d16000) Data frame received for 3\nI0819 14:39:24.584031 1353 log.go:181] (0x4000d16000) (0x4000f30000) Stream removed, broadcasting: 1\nI0819 14:39:24.585224 1353 log.go:181] (0x400013f220) (3) Data frame handling\nI0819 14:39:24.588016 1353 log.go:181] (0x4000d16000) Go away received\nI0819 14:39:24.591998 1353 log.go:181] (0x4000d16000) (0x4000f30000) Stream removed, broadcasting: 1\nI0819 14:39:24.593134 1353 log.go:181] (0x4000d16000) (0x400013f220) Stream removed, broadcasting: 3\nI0819 14:39:24.593625 1353 log.go:181] (0x4000d16000) (0x4000388960) Stream removed, broadcasting: 5\n" Aug 19 14:39:24.607: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 14:39:24.607: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 14:39:24.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:39:26.666: INFO: stderr: "I0819 14:39:26.565101 1373 log.go:181] (0x4000552160) (0x40008f40a0) Create stream\nI0819 14:39:26.569750 1373 log.go:181] (0x4000552160) (0x40008f40a0) Stream added, broadcasting: 1\nI0819 14:39:26.592564 1373 log.go:181] (0x4000552160) Reply frame received for 1\nI0819 14:39:26.593659 1373 log.go:181] (0x4000552160) (0x4000d980a0) Create stream\nI0819 14:39:26.593779 1373 log.go:181] (0x4000552160) (0x4000d980a0) Stream added, broadcasting: 3\nI0819 14:39:26.595545 1373 log.go:181] (0x4000552160) Reply frame received for 3\nI0819 14:39:26.595854 1373 log.go:181] (0x4000552160) (0x4000c07cc0) Create stream\nI0819 14:39:26.595933 1373 log.go:181] (0x4000552160) (0x4000c07cc0) Stream added, broadcasting: 5\nI0819 14:39:26.597310 1373 log.go:181] (0x4000552160) Reply frame received for 5\nI0819 14:39:26.646000 1373 log.go:181] (0x4000552160) Data frame received for 3\nI0819 14:39:26.646340 1373 log.go:181] (0x4000552160) Data frame received for 5\nI0819 14:39:26.646471 1373 log.go:181] (0x4000c07cc0) (5) Data frame handling\nI0819 14:39:26.646591 1373 log.go:181] (0x4000d980a0) (3) Data frame handling\nI0819 14:39:26.646846 1373 log.go:181] (0x4000552160) Data frame received for 1\nI0819 14:39:26.646934 1373 log.go:181] (0x40008f40a0) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0819 14:39:26.648026 1373 log.go:181] (0x4000d980a0) (3) Data frame 
sent\nI0819 14:39:26.648205 1373 log.go:181] (0x4000552160) Data frame received for 3\nI0819 14:39:26.648281 1373 log.go:181] (0x4000d980a0) (3) Data frame handling\nI0819 14:39:26.648387 1373 log.go:181] (0x4000c07cc0) (5) Data frame sent\nI0819 14:39:26.648477 1373 log.go:181] (0x40008f40a0) (1) Data frame sent\nI0819 14:39:26.648642 1373 log.go:181] (0x4000552160) Data frame received for 5\nI0819 14:39:26.648911 1373 log.go:181] (0x4000c07cc0) (5) Data frame handling\nI0819 14:39:26.649640 1373 log.go:181] (0x4000552160) (0x40008f40a0) Stream removed, broadcasting: 1\nI0819 14:39:26.652408 1373 log.go:181] (0x4000552160) Go away received\nI0819 14:39:26.654662 1373 log.go:181] (0x4000552160) (0x40008f40a0) Stream removed, broadcasting: 1\nI0819 14:39:26.654959 1373 log.go:181] (0x4000552160) (0x4000d980a0) Stream removed, broadcasting: 3\nI0819 14:39:26.655260 1373 log.go:181] (0x4000552160) (0x4000c07cc0) Stream removed, broadcasting: 5\n" Aug 19 14:39:26.667: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 14:39:26.667: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 14:39:26.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:39:28.157: INFO: rc: 1 Aug 19 14:39:28.158: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Aug 19 14:39:38.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:39:39.724: INFO: rc: 1 Aug 19 14:39:39.724: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:39:49.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:39:51.154: INFO: rc: 1 Aug 19 14:39:51.155: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:40:01.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:40:03.098: INFO: rc: 1 Aug 19 14:40:03.098: INFO: Waiting 10s to retry failed RunHostCmd: 
error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:40:13.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:40:14.511: INFO: rc: 1 Aug 19 14:40:14.511: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:40:24.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:40:25.959: INFO: rc: 1 Aug 19 14:40:25.959: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:40:35.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:40:37.297: INFO: rc: 1 Aug 19 14:40:37.298: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:40:47.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:40:48.696: INFO: rc: 1 Aug 19 14:40:48.696: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:40:58.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:41:00.006: INFO: rc: 1 Aug 19 14:41:00.007: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:41:10.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:41:11.408: INFO: rc: 1 Aug 19 14:41:11.408: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:41:21.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:41:22.850: INFO: rc: 1 Aug 19 14:41:22.851: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:41:32.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:41:34.292: INFO: rc: 1 Aug 19 14:41:34.293: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:41:44.293: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:41:45.581: INFO: rc: 1 Aug 19 14:41:45.581: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:41:55.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:41:56.890: INFO: rc: 1 Aug 19 14:41:56.890: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:42:06.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Aug 19 14:42:08.303: INFO: rc: 1 Aug 19 14:42:08.303: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:42:18.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:42:19.746: INFO: rc: 1 Aug 19 14:42:19.746: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:42:29.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:42:31.148: INFO: rc: 1 Aug 19 14:42:31.149: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:42:41.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:42:42.593: INFO: rc: 1 Aug 19 14:42:42.594: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:42:52.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:42:53.940: INFO: rc: 1 Aug 19 14:42:53.940: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:43:03.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:43:05.244: INFO: rc: 1 Aug 19 14:43:05.245: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:43:15.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:43:16.631: INFO: rc: 1 Aug 19 14:43:16.632: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:43:26.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:43:27.975: INFO: rc: 1 Aug 19 14:43:27.975: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:43:37.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:43:39.765: INFO: rc: 1 Aug 19 14:43:39.766: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:43:49.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:43:51.269: INFO: rc: 1 Aug 19 14:43:51.270: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:44:01.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:44:02.781: INFO: rc: 1 Aug 19 14:44:02.781: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:44:12.782: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:44:14.300: INFO: rc: 1 Aug 19 14:44:14.300: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:44:24.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:44:25.667: INFO: rc: 1 Aug 19 14:44:25.667: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Aug 19 14:44:35.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8904 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:44:37.002: INFO: rc: 1 Aug 19 14:44:37.002: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Aug 19 14:44:37.003: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 19 14:44:37.015: INFO: Deleting all statefulset in ns statefulset-8904 Aug 19 14:44:37.020: INFO: Scaling statefulset ss to 0 Aug 19 14:44:37.032: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 14:44:37.037: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:44:37.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8904" for this suite. 
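The long exchange above is the suite toggling pod readiness by hand: the webserver pods serve /usr/local/apache2/htdocs/index.html, so moving that file out of the docroot via kubectl exec makes the readiness probe fail (Running - Ready=false), and moving it back restores readiness. With the default OrderedReady pod management policy this is enough to show that scale-up proceeds ss-0 → ss-1 → ss-2 and that scaling halts while any pod is unready. A minimal sketch of the same trick by hand, reusing the ss / statefulset-8904 names from the log (the probe target is inferred from the mv commands; it is not shown explicitly in this output):

# Make ss-0 unready by hiding the file its readiness probe serves;
# "|| true" mirrors the suite and tolerates the file already being moved.
kubectl --namespace=statefulset-8904 exec ss-0 -- /bin/sh -x -c \
  'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# The pod stays Running but its Ready condition flips to False.
kubectl --namespace=statefulset-8904 get pod ss-0 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# With OrderedReady (the default), a scale-up will not create ss-1 until
# ss-0 is Ready again, so restore the file before scaling.
kubectl --namespace=statefulset-8904 exec ss-0 -- /bin/sh -x -c \
  'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
kubectl --namespace=statefulset-8904 scale statefulset ss --replicas=3

The repeated 'pods "ss-2" not found' retries near the end are expected rather than a failure: scale-down deletes pods in reverse ordinal order, so by the time the suite tries to restore index.html on ss-2 the pod is already terminating and then gone, and the RunHostCmd loop simply retries until the test moves on.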
• [SLOW TEST:420.155 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":128,"skipped":1900,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:44:37.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-6b45f583-9003-41a7-8f65-5ca7a4322c57 in namespace container-probe-8843 Aug 19 14:44:41.178: INFO: Started pod liveness-6b45f583-9003-41a7-8f65-5ca7a4322c57 in namespace container-probe-8843 STEP: checking the pod's current state and verifying that restartCount is present Aug 19 14:44:41.183: INFO: Initial restart count of pod liveness-6b45f583-9003-41a7-8f65-5ca7a4322c57 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:48:42.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8843" for this suite. 
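This probe test is the inverse of the crash-loop checks: it creates a pod whose container keeps accepting TCP connections on port 8080, attaches a tcpSocket liveness probe, and then simply watches restartCount stay at 0 for roughly four minutes before deleting the pod. A minimal sketch of an equivalent pod, with a hypothetical name and image (the suite's actual container and probe timings are not visible in this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo   # illustrative name, not the suite's
spec:
  containers:
  - name: server
    image: python:3-alpine  # illustrative image, not the suite's
    # Listens on 8080 indefinitely, so every probe attempt connects.
    command: ["python", "-m", "http.server", "8080"]
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF

# After several probe periods this should still print 0, which is the
# same property the test above asserts over its observation window.
kubectl get pod liveness-tcp-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'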
• [SLOW TEST:246.259 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":1911,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:48:43.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-96defa30-41ad-4ad6-9e74-3460f4d51cb3 STEP: Creating a pod to test consume configMaps Aug 19 14:48:44.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833" in namespace "configmap-6238" to be "Succeeded or Failed" Aug 19 14:48:44.344: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833": Phase="Pending", Reason="", readiness=false. Elapsed: 119.47039ms Aug 19 14:48:46.387: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162229524s Aug 19 14:48:48.726: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501469376s Aug 19 14:48:50.946: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833": Phase="Pending", Reason="", readiness=false. Elapsed: 6.721939325s Aug 19 14:48:53.125: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833": Phase="Pending", Reason="", readiness=false. Elapsed: 8.900139144s Aug 19 14:48:55.132: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.907099793s STEP: Saw pod success Aug 19 14:48:55.132: INFO: Pod "pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833" satisfied condition "Succeeded or Failed" Aug 19 14:48:55.159: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833 container configmap-volume-test: STEP: delete the pod Aug 19 14:48:55.227: INFO: Waiting for pod pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833 to disappear Aug 19 14:48:55.231: INFO: Pod pod-configmaps-e09b8b43-a610-43db-a926-e70576d49833 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:48:55.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6238" for this suite. • [SLOW TEST:11.918 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":1915,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:48:55.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:48:55.768: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 19 14:49:16.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-425 create -f -' Aug 19 14:49:24.153: INFO: stderr: "" Aug 19 14:49:24.153: INFO: stdout: "e2e-test-crd-publish-openapi-3318-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 19 14:49:24.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-425 delete e2e-test-crd-publish-openapi-3318-crds test-cr' Aug 19 14:49:25.566: INFO: stderr: "" Aug 19 14:49:25.566: INFO: stdout: "e2e-test-crd-publish-openapi-3318-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 19 14:49:25.567: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-425 apply -f -' Aug 19 14:49:28.236: INFO: stderr: "" Aug 19 14:49:28.236: INFO: stdout: "e2e-test-crd-publish-openapi-3318-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 19 14:49:28.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-425 delete e2e-test-crd-publish-openapi-3318-crds test-cr' Aug 19 14:49:29.916: INFO: stderr: "" Aug 19 14:49:29.916: INFO: stdout: "e2e-test-crd-publish-openapi-3318-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 19 14:49:29.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3318-crds' Aug 19 14:49:33.496: INFO: stderr: "" Aug 19 14:49:33.496: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3318-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:49:54.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-425" for this suite. • [SLOW TEST:59.701 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":131,"skipped":1934,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:49:54.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to 
test downward API volume plugin Aug 19 14:49:55.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b" in namespace "projected-6597" to be "Succeeded or Failed" Aug 19 14:49:55.082: INFO: Pod "downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.490266ms Aug 19 14:49:57.086: INFO: Pod "downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024510022s Aug 19 14:49:59.092: INFO: Pod "downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030571265s STEP: Saw pod success Aug 19 14:49:59.092: INFO: Pod "downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b" satisfied condition "Succeeded or Failed" Aug 19 14:49:59.097: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b container client-container: STEP: delete the pod Aug 19 14:49:59.162: INFO: Waiting for pod downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b to disappear Aug 19 14:49:59.180: INFO: Pod downwardapi-volume-c59fa49e-aac4-40a6-a12a-54e400ffb82b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:49:59.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6597" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":1938,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:49:59.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-fee20ea0-8aa6-4cb5-9cc7-ea051e5e68fc STEP: Creating a pod to test consume configMaps Aug 19 14:49:59.278: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1" in namespace "projected-9282" to be "Succeeded or Failed" Aug 19 14:49:59.291: INFO: Pod "pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.910651ms Aug 19 14:50:01.351: INFO: Pod "pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072959249s Aug 19 14:50:03.437: INFO: Pod "pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.158273867s Aug 19 14:50:05.443: INFO: Pod "pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164997703s STEP: Saw pod success Aug 19 14:50:05.444: INFO: Pod "pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1" satisfied condition "Succeeded or Failed" Aug 19 14:50:05.457: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1 container projected-configmap-volume-test: STEP: delete the pod Aug 19 14:50:05.496: INFO: Waiting for pod pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1 to disappear Aug 19 14:50:05.513: INFO: Pod pod-projected-configmaps-becbece6-7d6b-4d18-8f5a-fd85c8d992b1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:50:05.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9282" for this suite. • [SLOW TEST:6.330 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":1938,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:50:05.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 19 14:50:17.677: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:17.677: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:17.731740 10 log.go:181] (0x40016c0fd0) (0x40012c83c0) Create stream I0819 14:50:17.731849 10 log.go:181] (0x40016c0fd0) (0x40012c83c0) Stream added, broadcasting: 1 I0819 14:50:17.734664 10 log.go:181] (0x40016c0fd0) Reply frame received for 1 I0819 14:50:17.734848 10 log.go:181] (0x40016c0fd0) 
(0x4002528b40) Create stream I0819 14:50:17.734937 10 log.go:181] (0x40016c0fd0) (0x4002528b40) Stream added, broadcasting: 3 I0819 14:50:17.736026 10 log.go:181] (0x40016c0fd0) Reply frame received for 3 I0819 14:50:17.736123 10 log.go:181] (0x40016c0fd0) (0x4002528c80) Create stream I0819 14:50:17.736181 10 log.go:181] (0x40016c0fd0) (0x4002528c80) Stream added, broadcasting: 5 I0819 14:50:17.737474 10 log.go:181] (0x40016c0fd0) Reply frame received for 5 I0819 14:50:17.820489 10 log.go:181] (0x40016c0fd0) Data frame received for 3 I0819 14:50:17.820657 10 log.go:181] (0x4002528b40) (3) Data frame handling I0819 14:50:17.820924 10 log.go:181] (0x40016c0fd0) Data frame received for 5 I0819 14:50:17.821140 10 log.go:181] (0x4002528c80) (5) Data frame handling I0819 14:50:17.821364 10 log.go:181] (0x4002528b40) (3) Data frame sent I0819 14:50:17.821526 10 log.go:181] (0x40016c0fd0) Data frame received for 3 I0819 14:50:17.822047 10 log.go:181] (0x4002528b40) (3) Data frame handling I0819 14:50:17.822156 10 log.go:181] (0x40016c0fd0) Data frame received for 1 I0819 14:50:17.822224 10 log.go:181] (0x40012c83c0) (1) Data frame handling I0819 14:50:17.822287 10 log.go:181] (0x40012c83c0) (1) Data frame sent I0819 14:50:17.822345 10 log.go:181] (0x40016c0fd0) (0x40012c83c0) Stream removed, broadcasting: 1 I0819 14:50:17.822416 10 log.go:181] (0x40016c0fd0) Go away received I0819 14:50:17.822751 10 log.go:181] (0x40016c0fd0) (0x40012c83c0) Stream removed, broadcasting: 1 I0819 14:50:17.822859 10 log.go:181] (0x40016c0fd0) (0x4002528b40) Stream removed, broadcasting: 3 I0819 14:50:17.822949 10 log.go:181] (0x40016c0fd0) (0x4002528c80) Stream removed, broadcasting: 5 Aug 19 14:50:17.822: INFO: Exec stderr: "" Aug 19 14:50:17.823: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:17.823: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:17.873075 10 log.go:181] (0x40021a20b0) (0x4002529180) Create stream I0819 14:50:17.873202 10 log.go:181] (0x40021a20b0) (0x4002529180) Stream added, broadcasting: 1 I0819 14:50:17.877354 10 log.go:181] (0x40021a20b0) Reply frame received for 1 I0819 14:50:17.877511 10 log.go:181] (0x40021a20b0) (0x4002529220) Create stream I0819 14:50:17.877634 10 log.go:181] (0x40021a20b0) (0x4002529220) Stream added, broadcasting: 3 I0819 14:50:17.879043 10 log.go:181] (0x40021a20b0) Reply frame received for 3 I0819 14:50:17.879169 10 log.go:181] (0x40021a20b0) (0x40025292c0) Create stream I0819 14:50:17.879233 10 log.go:181] (0x40021a20b0) (0x40025292c0) Stream added, broadcasting: 5 I0819 14:50:17.880344 10 log.go:181] (0x40021a20b0) Reply frame received for 5 I0819 14:50:17.930699 10 log.go:181] (0x40021a20b0) Data frame received for 3 I0819 14:50:17.930885 10 log.go:181] (0x4002529220) (3) Data frame handling I0819 14:50:17.931129 10 log.go:181] (0x40021a20b0) Data frame received for 5 I0819 14:50:17.931285 10 log.go:181] (0x40025292c0) (5) Data frame handling I0819 14:50:17.931415 10 log.go:181] (0x4002529220) (3) Data frame sent I0819 14:50:17.931531 10 log.go:181] (0x40021a20b0) Data frame received for 3 I0819 14:50:17.931632 10 log.go:181] (0x4002529220) (3) Data frame handling I0819 14:50:17.932341 10 log.go:181] (0x40021a20b0) Data frame received for 1 I0819 14:50:17.932508 10 log.go:181] (0x4002529180) (1) Data frame handling I0819 14:50:17.932627 10 log.go:181] (0x4002529180) (1) Data frame sent 
I0819 14:50:17.932842 10 log.go:181] (0x40021a20b0) (0x4002529180) Stream removed, broadcasting: 1 I0819 14:50:17.932981 10 log.go:181] (0x40021a20b0) Go away received I0819 14:50:17.933152 10 log.go:181] (0x40021a20b0) (0x4002529180) Stream removed, broadcasting: 1 I0819 14:50:17.933260 10 log.go:181] (0x40021a20b0) (0x4002529220) Stream removed, broadcasting: 3 I0819 14:50:17.933411 10 log.go:181] (0x40021a20b0) (0x40025292c0) Stream removed, broadcasting: 5 Aug 19 14:50:17.933: INFO: Exec stderr: "" Aug 19 14:50:17.933: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:17.934: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:17.981224 10 log.go:181] (0x4001d10370) (0x40025ee0a0) Create stream I0819 14:50:17.981349 10 log.go:181] (0x4001d10370) (0x40025ee0a0) Stream added, broadcasting: 1 I0819 14:50:17.984189 10 log.go:181] (0x4001d10370) Reply frame received for 1 I0819 14:50:17.984343 10 log.go:181] (0x4001d10370) (0x40025ee280) Create stream I0819 14:50:17.984417 10 log.go:181] (0x4001d10370) (0x40025ee280) Stream added, broadcasting: 3 I0819 14:50:17.986031 10 log.go:181] (0x4001d10370) Reply frame received for 3 I0819 14:50:17.986187 10 log.go:181] (0x4001d10370) (0x40012c8500) Create stream I0819 14:50:17.986241 10 log.go:181] (0x4001d10370) (0x40012c8500) Stream added, broadcasting: 5 I0819 14:50:17.987711 10 log.go:181] (0x4001d10370) Reply frame received for 5 I0819 14:50:18.035709 10 log.go:181] (0x4001d10370) Data frame received for 3 I0819 14:50:18.035878 10 log.go:181] (0x40025ee280) (3) Data frame handling I0819 14:50:18.036044 10 log.go:181] (0x4001d10370) Data frame received for 5 I0819 14:50:18.036214 10 log.go:181] (0x40025ee280) (3) Data frame sent I0819 14:50:18.036382 10 log.go:181] (0x4001d10370) Data frame received for 3 I0819 14:50:18.036521 10 log.go:181] (0x4001d10370) Data frame received for 1 I0819 14:50:18.036633 10 log.go:181] (0x40025ee0a0) (1) Data frame handling I0819 14:50:18.036862 10 log.go:181] (0x40025ee280) (3) Data frame handling I0819 14:50:18.037013 10 log.go:181] (0x40012c8500) (5) Data frame handling I0819 14:50:18.037141 10 log.go:181] (0x40025ee0a0) (1) Data frame sent I0819 14:50:18.037251 10 log.go:181] (0x4001d10370) (0x40025ee0a0) Stream removed, broadcasting: 1 I0819 14:50:18.037359 10 log.go:181] (0x4001d10370) Go away received I0819 14:50:18.037727 10 log.go:181] (0x4001d10370) (0x40025ee0a0) Stream removed, broadcasting: 1 I0819 14:50:18.037982 10 log.go:181] (0x4001d10370) (0x40025ee280) Stream removed, broadcasting: 3 I0819 14:50:18.038089 10 log.go:181] (0x4001d10370) (0x40012c8500) Stream removed, broadcasting: 5 Aug 19 14:50:18.038: INFO: Exec stderr: "" Aug 19 14:50:18.038: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.038: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.098499 10 log.go:181] (0x40016c1600) (0x40012c88c0) Create stream I0819 14:50:18.098624 10 log.go:181] (0x40016c1600) (0x40012c88c0) Stream added, broadcasting: 1 I0819 14:50:18.101503 10 log.go:181] (0x40016c1600) Reply frame received for 1 I0819 14:50:18.101692 10 log.go:181] (0x40016c1600) (0x4002529360) Create stream I0819 14:50:18.101770 10 log.go:181] (0x40016c1600) (0x4002529360) Stream added, broadcasting: 3 
I0819 14:50:18.102914 10 log.go:181] (0x40016c1600) Reply frame received for 3 I0819 14:50:18.103018 10 log.go:181] (0x40016c1600) (0x40012c8a00) Create stream I0819 14:50:18.103076 10 log.go:181] (0x40016c1600) (0x40012c8a00) Stream added, broadcasting: 5 I0819 14:50:18.104232 10 log.go:181] (0x40016c1600) Reply frame received for 5 I0819 14:50:18.164559 10 log.go:181] (0x40016c1600) Data frame received for 5 I0819 14:50:18.164846 10 log.go:181] (0x40012c8a00) (5) Data frame handling I0819 14:50:18.165007 10 log.go:181] (0x40016c1600) Data frame received for 3 I0819 14:50:18.165125 10 log.go:181] (0x4002529360) (3) Data frame handling I0819 14:50:18.165288 10 log.go:181] (0x4002529360) (3) Data frame sent I0819 14:50:18.165400 10 log.go:181] (0x40016c1600) Data frame received for 3 I0819 14:50:18.165495 10 log.go:181] (0x4002529360) (3) Data frame handling I0819 14:50:18.165953 10 log.go:181] (0x40016c1600) Data frame received for 1 I0819 14:50:18.166061 10 log.go:181] (0x40012c88c0) (1) Data frame handling I0819 14:50:18.166218 10 log.go:181] (0x40012c88c0) (1) Data frame sent I0819 14:50:18.166387 10 log.go:181] (0x40016c1600) (0x40012c88c0) Stream removed, broadcasting: 1 I0819 14:50:18.166545 10 log.go:181] (0x40016c1600) Go away received I0819 14:50:18.166890 10 log.go:181] (0x40016c1600) (0x40012c88c0) Stream removed, broadcasting: 1 I0819 14:50:18.166955 10 log.go:181] (0x40016c1600) (0x4002529360) Stream removed, broadcasting: 3 I0819 14:50:18.167014 10 log.go:181] (0x40016c1600) (0x40012c8a00) Stream removed, broadcasting: 5 Aug 19 14:50:18.167: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 19 14:50:18.167: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.167: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.221704 10 log.go:181] (0x4001c56b00) (0x4001e1ef00) Create stream I0819 14:50:18.221840 10 log.go:181] (0x4001c56b00) (0x4001e1ef00) Stream added, broadcasting: 1 I0819 14:50:18.225039 10 log.go:181] (0x4001c56b00) Reply frame received for 1 I0819 14:50:18.225278 10 log.go:181] (0x4001c56b00) (0x4001c1a820) Create stream I0819 14:50:18.225395 10 log.go:181] (0x4001c56b00) (0x4001c1a820) Stream added, broadcasting: 3 I0819 14:50:18.227088 10 log.go:181] (0x4001c56b00) Reply frame received for 3 I0819 14:50:18.227211 10 log.go:181] (0x4001c56b00) (0x4001c1a8c0) Create stream I0819 14:50:18.227278 10 log.go:181] (0x4001c56b00) (0x4001c1a8c0) Stream added, broadcasting: 5 I0819 14:50:18.228356 10 log.go:181] (0x4001c56b00) Reply frame received for 5 I0819 14:50:18.282641 10 log.go:181] (0x4001c56b00) Data frame received for 3 I0819 14:50:18.282799 10 log.go:181] (0x4001c1a820) (3) Data frame handling I0819 14:50:18.282911 10 log.go:181] (0x4001c56b00) Data frame received for 5 I0819 14:50:18.283010 10 log.go:181] (0x4001c1a8c0) (5) Data frame handling I0819 14:50:18.283135 10 log.go:181] (0x4001c1a820) (3) Data frame sent I0819 14:50:18.283193 10 log.go:181] (0x4001c56b00) Data frame received for 3 I0819 14:50:18.283247 10 log.go:181] (0x4001c1a820) (3) Data frame handling I0819 14:50:18.283521 10 log.go:181] (0x4001c56b00) Data frame received for 1 I0819 14:50:18.283576 10 log.go:181] (0x4001e1ef00) (1) Data frame handling I0819 14:50:18.283644 10 log.go:181] (0x4001e1ef00) (1) Data frame sent I0819 14:50:18.283722 10 
log.go:181] (0x4001c56b00) (0x4001e1ef00) Stream removed, broadcasting: 1 I0819 14:50:18.283901 10 log.go:181] (0x4001c56b00) Go away received I0819 14:50:18.284015 10 log.go:181] (0x4001c56b00) (0x4001e1ef00) Stream removed, broadcasting: 1 I0819 14:50:18.284081 10 log.go:181] (0x4001c56b00) (0x4001c1a820) Stream removed, broadcasting: 3 I0819 14:50:18.284137 10 log.go:181] (0x4001c56b00) (0x4001c1a8c0) Stream removed, broadcasting: 5 Aug 19 14:50:18.284: INFO: Exec stderr: "" Aug 19 14:50:18.284: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.284: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.340329 10 log.go:181] (0x4001c57130) (0x4001e1f2c0) Create stream I0819 14:50:18.340422 10 log.go:181] (0x4001c57130) (0x4001e1f2c0) Stream added, broadcasting: 1 I0819 14:50:18.343303 10 log.go:181] (0x4001c57130) Reply frame received for 1 I0819 14:50:18.343524 10 log.go:181] (0x4001c57130) (0x4001e1f360) Create stream I0819 14:50:18.343660 10 log.go:181] (0x4001c57130) (0x4001e1f360) Stream added, broadcasting: 3 I0819 14:50:18.345624 10 log.go:181] (0x4001c57130) Reply frame received for 3 I0819 14:50:18.345800 10 log.go:181] (0x4001c57130) (0x40025ee320) Create stream I0819 14:50:18.345934 10 log.go:181] (0x4001c57130) (0x40025ee320) Stream added, broadcasting: 5 I0819 14:50:18.347877 10 log.go:181] (0x4001c57130) Reply frame received for 5 I0819 14:50:18.397186 10 log.go:181] (0x4001c57130) Data frame received for 5 I0819 14:50:18.397330 10 log.go:181] (0x40025ee320) (5) Data frame handling I0819 14:50:18.397447 10 log.go:181] (0x4001c57130) Data frame received for 3 I0819 14:50:18.397551 10 log.go:181] (0x4001e1f360) (3) Data frame handling I0819 14:50:18.397686 10 log.go:181] (0x4001e1f360) (3) Data frame sent I0819 14:50:18.397830 10 log.go:181] (0x4001c57130) Data frame received for 3 I0819 14:50:18.397923 10 log.go:181] (0x4001e1f360) (3) Data frame handling I0819 14:50:18.399017 10 log.go:181] (0x4001c57130) Data frame received for 1 I0819 14:50:18.399172 10 log.go:181] (0x4001e1f2c0) (1) Data frame handling I0819 14:50:18.399333 10 log.go:181] (0x4001e1f2c0) (1) Data frame sent I0819 14:50:18.399507 10 log.go:181] (0x4001c57130) (0x4001e1f2c0) Stream removed, broadcasting: 1 I0819 14:50:18.399691 10 log.go:181] (0x4001c57130) Go away received I0819 14:50:18.399985 10 log.go:181] (0x4001c57130) (0x4001e1f2c0) Stream removed, broadcasting: 1 I0819 14:50:18.400102 10 log.go:181] (0x4001c57130) (0x4001e1f360) Stream removed, broadcasting: 3 I0819 14:50:18.400191 10 log.go:181] (0x4001c57130) (0x40025ee320) Stream removed, broadcasting: 5 Aug 19 14:50:18.400: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 19 14:50:18.400: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.400: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.460416 10 log.go:181] (0x40026920b0) (0x4006478320) Create stream I0819 14:50:18.460543 10 log.go:181] (0x40026920b0) (0x4006478320) Stream added, broadcasting: 1 I0819 14:50:18.463463 10 log.go:181] (0x40026920b0) Reply frame received for 1 I0819 14:50:18.463704 10 log.go:181] (0x40026920b0) (0x4002529400) Create stream I0819 
14:50:18.463814 10 log.go:181] (0x40026920b0) (0x4002529400) Stream added, broadcasting: 3 I0819 14:50:18.465469 10 log.go:181] (0x40026920b0) Reply frame received for 3 I0819 14:50:18.465643 10 log.go:181] (0x40026920b0) (0x40025ee460) Create stream I0819 14:50:18.465746 10 log.go:181] (0x40026920b0) (0x40025ee460) Stream added, broadcasting: 5 I0819 14:50:18.467185 10 log.go:181] (0x40026920b0) Reply frame received for 5 I0819 14:50:18.511836 10 log.go:181] (0x40026920b0) Data frame received for 3 I0819 14:50:18.511974 10 log.go:181] (0x4002529400) (3) Data frame handling I0819 14:50:18.512082 10 log.go:181] (0x40026920b0) Data frame received for 5 I0819 14:50:18.512251 10 log.go:181] (0x40025ee460) (5) Data frame handling I0819 14:50:18.512371 10 log.go:181] (0x4002529400) (3) Data frame sent I0819 14:50:18.512498 10 log.go:181] (0x40026920b0) Data frame received for 3 I0819 14:50:18.512600 10 log.go:181] (0x4002529400) (3) Data frame handling I0819 14:50:18.512879 10 log.go:181] (0x40026920b0) Data frame received for 1 I0819 14:50:18.512952 10 log.go:181] (0x4006478320) (1) Data frame handling I0819 14:50:18.513025 10 log.go:181] (0x4006478320) (1) Data frame sent I0819 14:50:18.513118 10 log.go:181] (0x40026920b0) (0x4006478320) Stream removed, broadcasting: 1 I0819 14:50:18.513215 10 log.go:181] (0x40026920b0) Go away received I0819 14:50:18.513464 10 log.go:181] (0x40026920b0) (0x4006478320) Stream removed, broadcasting: 1 I0819 14:50:18.513554 10 log.go:181] (0x40026920b0) (0x4002529400) Stream removed, broadcasting: 3 I0819 14:50:18.513630 10 log.go:181] (0x40026920b0) (0x40025ee460) Stream removed, broadcasting: 5 Aug 19 14:50:18.513: INFO: Exec stderr: "" Aug 19 14:50:18.513: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.513: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.578107 10 log.go:181] (0x400688af20) (0x40003ad900) Create stream I0819 14:50:18.578215 10 log.go:181] (0x400688af20) (0x40003ad900) Stream added, broadcasting: 1 I0819 14:50:18.580837 10 log.go:181] (0x400688af20) Reply frame received for 1 I0819 14:50:18.580914 10 log.go:181] (0x400688af20) (0x40003ada40) Create stream I0819 14:50:18.580957 10 log.go:181] (0x400688af20) (0x40003ada40) Stream added, broadcasting: 3 I0819 14:50:18.581842 10 log.go:181] (0x400688af20) Reply frame received for 3 I0819 14:50:18.581991 10 log.go:181] (0x400688af20) (0x40064783c0) Create stream I0819 14:50:18.582061 10 log.go:181] (0x400688af20) (0x40064783c0) Stream added, broadcasting: 5 I0819 14:50:18.583012 10 log.go:181] (0x400688af20) Reply frame received for 5 I0819 14:50:18.640395 10 log.go:181] (0x400688af20) Data frame received for 5 I0819 14:50:18.640521 10 log.go:181] (0x40064783c0) (5) Data frame handling I0819 14:50:18.640619 10 log.go:181] (0x400688af20) Data frame received for 3 I0819 14:50:18.640715 10 log.go:181] (0x40003ada40) (3) Data frame handling I0819 14:50:18.640916 10 log.go:181] (0x40003ada40) (3) Data frame sent I0819 14:50:18.641025 10 log.go:181] (0x400688af20) Data frame received for 3 I0819 14:50:18.641154 10 log.go:181] (0x40003ada40) (3) Data frame handling I0819 14:50:18.641293 10 log.go:181] (0x400688af20) Data frame received for 1 I0819 14:50:18.641428 10 log.go:181] (0x40003ad900) (1) Data frame handling I0819 14:50:18.641515 10 log.go:181] (0x40003ad900) (1) Data frame sent I0819 14:50:18.641597 10 
log.go:181] (0x400688af20) (0x40003ad900) Stream removed, broadcasting: 1 I0819 14:50:18.641692 10 log.go:181] (0x400688af20) Go away received I0819 14:50:18.641947 10 log.go:181] (0x400688af20) (0x40003ad900) Stream removed, broadcasting: 1 I0819 14:50:18.642038 10 log.go:181] (0x400688af20) (0x40003ada40) Stream removed, broadcasting: 3 I0819 14:50:18.642117 10 log.go:181] (0x400688af20) (0x40064783c0) Stream removed, broadcasting: 5 Aug 19 14:50:18.642: INFO: Exec stderr: "" Aug 19 14:50:18.642: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.642: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.701907 10 log.go:181] (0x400688b550) (0x40003adf40) Create stream I0819 14:50:18.702028 10 log.go:181] (0x400688b550) (0x40003adf40) Stream added, broadcasting: 1 I0819 14:50:18.706295 10 log.go:181] (0x400688b550) Reply frame received for 1 I0819 14:50:18.706470 10 log.go:181] (0x400688b550) (0x4001e1f4a0) Create stream I0819 14:50:18.706546 10 log.go:181] (0x400688b550) (0x4001e1f4a0) Stream added, broadcasting: 3 I0819 14:50:18.707447 10 log.go:181] (0x400688b550) Reply frame received for 3 I0819 14:50:18.707560 10 log.go:181] (0x400688b550) (0x4001e1f5e0) Create stream I0819 14:50:18.707615 10 log.go:181] (0x400688b550) (0x4001e1f5e0) Stream added, broadcasting: 5 I0819 14:50:18.708507 10 log.go:181] (0x400688b550) Reply frame received for 5 I0819 14:50:18.766827 10 log.go:181] (0x400688b550) Data frame received for 5 I0819 14:50:18.766930 10 log.go:181] (0x4001e1f5e0) (5) Data frame handling I0819 14:50:18.767030 10 log.go:181] (0x400688b550) Data frame received for 3 I0819 14:50:18.767118 10 log.go:181] (0x4001e1f4a0) (3) Data frame handling I0819 14:50:18.767197 10 log.go:181] (0x4001e1f4a0) (3) Data frame sent I0819 14:50:18.767254 10 log.go:181] (0x400688b550) Data frame received for 3 I0819 14:50:18.767306 10 log.go:181] (0x4001e1f4a0) (3) Data frame handling I0819 14:50:18.768091 10 log.go:181] (0x400688b550) Data frame received for 1 I0819 14:50:18.768141 10 log.go:181] (0x40003adf40) (1) Data frame handling I0819 14:50:18.768251 10 log.go:181] (0x40003adf40) (1) Data frame sent I0819 14:50:18.768309 10 log.go:181] (0x400688b550) (0x40003adf40) Stream removed, broadcasting: 1 I0819 14:50:18.768372 10 log.go:181] (0x400688b550) Go away received I0819 14:50:18.768582 10 log.go:181] (0x400688b550) (0x40003adf40) Stream removed, broadcasting: 1 I0819 14:50:18.768659 10 log.go:181] (0x400688b550) (0x4001e1f4a0) Stream removed, broadcasting: 3 I0819 14:50:18.768791 10 log.go:181] (0x400688b550) (0x4001e1f5e0) Stream removed, broadcasting: 5 Aug 19 14:50:18.768: INFO: Exec stderr: "" Aug 19 14:50:18.768: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8408 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 14:50:18.769: INFO: >>> kubeConfig: /root/.kube/config I0819 14:50:18.823635 10 log.go:181] (0x40021a2420) (0x40025295e0) Create stream I0819 14:50:18.823770 10 log.go:181] (0x40021a2420) (0x40025295e0) Stream added, broadcasting: 1 I0819 14:50:18.827009 10 log.go:181] (0x40021a2420) Reply frame received for 1 I0819 14:50:18.827224 10 log.go:181] (0x40021a2420) (0x40013d8140) Create stream I0819 14:50:18.827331 10 log.go:181] (0x40021a2420) (0x40013d8140) Stream added, broadcasting: 3 
I0819 14:50:18.828603 10 log.go:181] (0x40021a2420) Reply frame received for 3 I0819 14:50:18.828806 10 log.go:181] (0x40021a2420) (0x4002529680) Create stream I0819 14:50:18.828889 10 log.go:181] (0x40021a2420) (0x4002529680) Stream added, broadcasting: 5 I0819 14:50:18.830173 10 log.go:181] (0x40021a2420) Reply frame received for 5 I0819 14:50:18.878402 10 log.go:181] (0x40021a2420) Data frame received for 5 I0819 14:50:18.878531 10 log.go:181] (0x4002529680) (5) Data frame handling I0819 14:50:18.878654 10 log.go:181] (0x40021a2420) Data frame received for 3 I0819 14:50:18.878822 10 log.go:181] (0x40013d8140) (3) Data frame handling I0819 14:50:18.878946 10 log.go:181] (0x40013d8140) (3) Data frame sent I0819 14:50:18.879042 10 log.go:181] (0x40021a2420) Data frame received for 3 I0819 14:50:18.879131 10 log.go:181] (0x40013d8140) (3) Data frame handling I0819 14:50:18.879671 10 log.go:181] (0x40021a2420) Data frame received for 1 I0819 14:50:18.879831 10 log.go:181] (0x40025295e0) (1) Data frame handling I0819 14:50:18.879977 10 log.go:181] (0x40025295e0) (1) Data frame sent I0819 14:50:18.880135 10 log.go:181] (0x40021a2420) (0x40025295e0) Stream removed, broadcasting: 1 I0819 14:50:18.880342 10 log.go:181] (0x40021a2420) Go away received I0819 14:50:18.880616 10 log.go:181] (0x40021a2420) (0x40025295e0) Stream removed, broadcasting: 1 I0819 14:50:18.880826 10 log.go:181] (0x40021a2420) (0x40013d8140) Stream removed, broadcasting: 3 I0819 14:50:18.880982 10 log.go:181] (0x40021a2420) (0x4002529680) Stream removed, broadcasting: 5 Aug 19 14:50:18.881: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:50:18.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8408" for this suite. 
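Note: the contrast the test above draws is that kubelet rewrites /etc/hosts only for containers that neither run on the host network nor mount their own hosts file. A minimal client-go sketch of the two pod shapes involved follows; the pod and container names mirror the log, but the image and commands are illustrative assumptions, not what the suite actually deploys.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// hostNetwork=false: kubelet writes a managed /etc/hosts into each container.
	// A container that mounts its own volume at /etc/hosts (busybox-3 in the log)
	// is left unmanaged even in this pod.
	managed := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	// hostNetwork=true: /etc/hosts comes straight from the node, so kubelet
	// does not manage it, which is what the second half of the test verifies.
	hostNet := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Println(managed.Name, hostNet.Name)
}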
• [SLOW TEST:13.368 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":134,"skipped":1947,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:50:18.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:50:18.971: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:50:25.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-183" for this suite. 
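The websocket exec test above drives the same "exec" subresource that client-go exposes. A rough sketch of the ordinary client path follows; it uses the stock SPDY executor rather than a raw websocket, and the kubeconfig source, pod name, container name, and command are all placeholders.

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// Build a request against the pod's "exec" subresource.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("my-pod"). // illustrative pod
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main", // illustrative container name
			Command:   []string{"cat", "/etc/resolv.conf"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Stream blocks until the remote command exits; output is copied locally.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}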
• [SLOW TEST:6.435 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":1958,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:50:25.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 19 14:50:25.723: INFO: Waiting up to 5m0s for pod "pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985" in namespace "emptydir-447" to be "Succeeded or Failed" Aug 19 14:50:25.999: INFO: Pod "pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985": Phase="Pending", Reason="", readiness=false. Elapsed: 275.831223ms Aug 19 14:50:28.005: INFO: Pod "pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281721006s Aug 19 14:50:30.010: INFO: Pod "pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985": Phase="Running", Reason="", readiness=true. Elapsed: 4.28645265s Aug 19 14:50:32.015: INFO: Pod "pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.291790123s STEP: Saw pod success Aug 19 14:50:32.015: INFO: Pod "pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985" satisfied condition "Succeeded or Failed" Aug 19 14:50:32.020: INFO: Trying to get logs from node latest-worker2 pod pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985 container test-container: STEP: delete the pod Aug 19 14:50:32.037: INFO: Waiting for pod pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985 to disappear Aug 19 14:50:32.040: INFO: Pod pod-1d46aebe-f4bb-40fe-b14a-688a9cccd985 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:50:32.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-447" for this suite. 
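For reference, a sketch of the kind of pod the emptyDir test builds: default (node-disk) medium, a non-root UID, and a file created 0644 via the process umask. The UID, image, and paths here are illustrative assumptions, not the suite's actual values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Empty EmptyDirVolumeSource means the default medium (node disk).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative
				// umask 022 yields 0644 on newly created files.
				Command:      []string{"sh", "-c", "umask 022 && echo data > /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}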
• [SLOW TEST:6.724 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":136,"skipped":1971,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:50:32.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 14:50:32.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f" in namespace "downward-api-4785" to be "Succeeded or Failed" Aug 19 14:50:32.168: INFO: Pod "downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576252ms Aug 19 14:50:34.189: INFO: Pod "downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027349702s Aug 19 14:50:36.193: INFO: Pod "downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03171918s STEP: Saw pod success Aug 19 14:50:36.193: INFO: Pod "downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f" satisfied condition "Succeeded or Failed" Aug 19 14:50:36.196: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f container client-container: STEP: delete the pod Aug 19 14:50:36.251: INFO: Waiting for pod downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f to disappear Aug 19 14:50:36.259: INFO: Pod downwardapi-volume-02353805-fbe2-4e5e-8ade-1edf5c62e86f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:50:36.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4785" for this suite. 
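The mode check above corresponds to the per-item Mode field on a downwardAPI volume. A minimal sketch, with an assumed 0400 mode and illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // kubelet creates the projected file with these permissions
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}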
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":1992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:50:36.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3468, will wait for the garbage collector to delete the pods Aug 19 14:50:42.620: INFO: Deleting Job.batch foo took: 8.421775ms Aug 19 14:50:43.120: INFO: Terminating Job.batch foo pods took: 500.4369ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:51:20.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3468" for this suite. • [SLOW TEST:43.870 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":138,"skipped":2022,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:51:20.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table 
transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:51:20.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9930" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":139,"skipped":2023,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:51:20.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3831 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3831 STEP: Deleting pre-stop pod Aug 19 14:51:33.608: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:51:33.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3831" for this suite. 
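The "prestop": 1 entry recorded by the server above comes from a preStop lifecycle hook on the tester pod. A sketch of how such a hook is wired, assuming an HTTP hook on port 8080 (the path and port are illustrative; the hook type is named Handler in v1.19-era APIs and LifecycleHandler in later releases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "busybox", // illustrative
				Lifecycle: &corev1.Lifecycle{
					// kubelet invokes this hook before sending SIGTERM when the
					// pod is deleted, which is what the test observes.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/prestop", Port: intstr.FromInt(8080)},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}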
• [SLOW TEST:13.284 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":140,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:51:33.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-pn4r STEP: Creating a pod to test atomic-volume-subpath Aug 19 14:51:34.411: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pn4r" in namespace "subpath-4361" to be "Succeeded or Failed" Aug 19 14:51:34.427: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Pending", Reason="", readiness=false. Elapsed: 15.547926ms Aug 19 14:51:36.485: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07378369s Aug 19 14:51:38.491: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 4.079937579s Aug 19 14:51:40.496: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 6.084381805s Aug 19 14:51:42.503: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 8.091884301s Aug 19 14:51:44.509: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 10.097010249s Aug 19 14:51:46.514: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 12.102314571s Aug 19 14:51:48.520: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 14.108494536s Aug 19 14:51:50.525: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 16.113596955s Aug 19 14:51:52.535: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 18.123876519s Aug 19 14:51:54.599: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.187866239s Aug 19 14:51:56.606: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Running", Reason="", readiness=true. Elapsed: 22.194734708s Aug 19 14:51:58.614: INFO: Pod "pod-subpath-test-projected-pn4r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.202062454s STEP: Saw pod success Aug 19 14:51:58.614: INFO: Pod "pod-subpath-test-projected-pn4r" satisfied condition "Succeeded or Failed" Aug 19 14:51:58.620: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-pn4r container test-container-subpath-projected-pn4r: STEP: delete the pod Aug 19 14:51:58.864: INFO: Waiting for pod pod-subpath-test-projected-pn4r to disappear Aug 19 14:51:58.878: INFO: Pod pod-subpath-test-projected-pn4r no longer exists STEP: Deleting pod pod-subpath-test-projected-pn4r Aug 19 14:51:58.878: INFO: Deleting pod "pod-subpath-test-projected-pn4r" in namespace "subpath-4361" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:51:58.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4361" for this suite. • [SLOW TEST:25.261 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":141,"skipped":2065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:51:58.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 19 14:51:59.038: INFO: >>> kubeConfig: /root/.kube/config Aug 19 14:52:20.625: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:53:46.167: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1046" for this suite. • [SLOW TEST:107.280 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":142,"skipped":2100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:53:46.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 14:53:47.274: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743" in namespace "security-context-test-9913" to be "Succeeded or Failed" Aug 19 14:53:47.781: INFO: Pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743": Phase="Pending", Reason="", readiness=false. Elapsed: 507.106138ms Aug 19 14:53:49.786: INFO: Pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512436693s Aug 19 14:53:51.805: INFO: Pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530772284s Aug 19 14:53:53.811: INFO: Pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743": Phase="Running", Reason="", readiness=true. Elapsed: 6.537590422s Aug 19 14:53:55.817: INFO: Pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.543323893s Aug 19 14:53:55.817: INFO: Pod "alpine-nnp-false-c3ceeba3-97be-43e3-a355-a1b51922a743" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:53:55.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9913" for this suite. 
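AllowPrivilegeEscalation=false, which the test above exercises, is a per-container securityContext field. A minimal sketch; the image and the comment about the probe binary are assumptions based on the test name, not the suite's exact manifest.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	noEscalation := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-false",
				Image: "alpine", // illustrative; the suite runs a binary that attempts a setuid exec
				SecurityContext: &corev1.SecurityContext{
					// Translates to no_new_privs in the runtime, so setuid binaries
					// cannot raise the effective UID above the one the pod runs as.
					AllowPrivilegeEscalation: &noEscalation,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}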
• [SLOW TEST:9.668 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2135,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:53:55.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 19 14:54:03.557: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:54:03.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3390" for this suite. 
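The empty termination message asserted above follows from FallbackToLogsOnError semantics: logs are consulted only when the container fails, so a successful container that writes nothing reports an empty message. A sketch of the relevant container fields, with illustrative name, image, and command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "term",
				Image:   "busybox", // illustrative
				Command: []string{"true"}, // exits 0 without writing a termination message
				TerminationMessagePath: "/dev/termination-log", // the default path
				// Logs are only a fallback on error; on success with an empty
				// message file, the reported message stays empty.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Println(pod.Name)
}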
• [SLOW TEST:7.778 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2140,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:54:03.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Aug 19 14:54:03.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f -' Aug 19 14:54:07.669: INFO: stderr: "" Aug 19 14:54:07.670: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Aug 19 14:54:07.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config diff -f -' Aug 19 14:54:12.645: INFO: rc: 1 Aug 19 14:54:12.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete -f -' Aug 19 14:54:14.100: INFO: stderr: "" Aug 19 14:54:14.100: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:54:14.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6289" for this suite. 
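The rc: 1 logged above is how kubectl diff reports "differences found"; exit codes greater than 1 indicate a real failure. A small sketch of driving it from Go under that convention (the manifest filename is a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder manifest; point this at any resource whose live state differs.
	cmd := exec.Command("kubectl", "diff", "-f", "deployment.yaml")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("live state differs from declared state:\n%s", out)
	} else if err != nil {
		fmt.Println("kubectl diff failed:", err)
	} else {
		fmt.Println("no differences found")
	}
}

Treating exit code 1 as data rather than as an error is what lets the test above pass despite the nonzero rc.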
• [SLOW TEST:10.531 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":145,"skipped":2145,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:54:14.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:54:25.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2729" for this suite. 
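The read-only test above maps to ReadOnlyRootFilesystem in the container securityContext. A minimal sketch, with illustrative image and command; writes anywhere outside a mounted volume are expected to fail.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "echo x > /file"}, // should fail: rootfs is read-only
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly, // writes succeed only on mounted volumes
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}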
• [SLOW TEST:10.988 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:54:25.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 19 14:54:25.684: INFO: Waiting up to 5m0s for pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c" in namespace "downward-api-4411" to be "Succeeded or Failed" Aug 19 14:54:26.642: INFO: Pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 958.240486ms Aug 19 14:54:28.649: INFO: Pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.965199861s Aug 19 14:54:30.781: INFO: Pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.097645176s Aug 19 14:54:33.039: INFO: Pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355736392s Aug 19 14:54:35.321: INFO: Pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.637113572s STEP: Saw pod success Aug 19 14:54:35.321: INFO: Pod "downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c" satisfied condition "Succeeded or Failed" Aug 19 14:54:35.330: INFO: Trying to get logs from node latest-worker pod downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c container dapi-container: STEP: delete the pod Aug 19 14:54:36.290: INFO: Waiting for pod downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c to disappear Aug 19 14:54:36.294: INFO: Pod downward-api-742cdd55-e0de-45bf-b900-7f2a45249b9c no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:54:36.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4411" for this suite. • [SLOW TEST:11.148 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":147,"skipped":2186,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:54:36.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6141 Aug 19 14:54:46.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 19 14:54:48.277: INFO: stderr: "I0819 14:54:48.163534 2143 log.go:181] (0x4000276160) (0x400019cdc0) Create stream\nI0819 14:54:48.168227 2143 log.go:181] (0x4000276160) (0x400019cdc0) Stream added, broadcasting: 1\nI0819 14:54:48.182396 2143 log.go:181] (0x4000276160) Reply frame received for 1\nI0819 14:54:48.183457 2143 log.go:181] (0x4000276160) (0x400044f5e0) Create stream\nI0819 14:54:48.183581 2143 log.go:181] (0x4000276160) (0x400044f5e0) Stream added, broadcasting: 3\nI0819 14:54:48.185447 2143 log.go:181] (0x4000276160) Reply frame received 
for 3\nI0819 14:54:48.185817 2143 log.go:181] (0x4000276160) (0x400019d2c0) Create stream\nI0819 14:54:48.185912 2143 log.go:181] (0x4000276160) (0x400019d2c0) Stream added, broadcasting: 5\nI0819 14:54:48.187324 2143 log.go:181] (0x4000276160) Reply frame received for 5\nI0819 14:54:48.252618 2143 log.go:181] (0x4000276160) Data frame received for 5\nI0819 14:54:48.253048 2143 log.go:181] (0x400019d2c0) (5) Data frame handling\nI0819 14:54:48.253802 2143 log.go:181] (0x400019d2c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0819 14:54:48.256108 2143 log.go:181] (0x4000276160) Data frame received for 3\nI0819 14:54:48.256227 2143 log.go:181] (0x400044f5e0) (3) Data frame handling\nI0819 14:54:48.256385 2143 log.go:181] (0x400044f5e0) (3) Data frame sent\nI0819 14:54:48.256914 2143 log.go:181] (0x4000276160) Data frame received for 3\nI0819 14:54:48.256983 2143 log.go:181] (0x400044f5e0) (3) Data frame handling\nI0819 14:54:48.257433 2143 log.go:181] (0x4000276160) Data frame received for 5\nI0819 14:54:48.257564 2143 log.go:181] (0x400019d2c0) (5) Data frame handling\nI0819 14:54:48.259181 2143 log.go:181] (0x4000276160) Data frame received for 1\nI0819 14:54:48.259384 2143 log.go:181] (0x400019cdc0) (1) Data frame handling\nI0819 14:54:48.259530 2143 log.go:181] (0x400019cdc0) (1) Data frame sent\nI0819 14:54:48.260292 2143 log.go:181] (0x4000276160) (0x400019cdc0) Stream removed, broadcasting: 1\nI0819 14:54:48.263502 2143 log.go:181] (0x4000276160) Go away received\nI0819 14:54:48.266984 2143 log.go:181] (0x4000276160) (0x400019cdc0) Stream removed, broadcasting: 1\nI0819 14:54:48.267287 2143 log.go:181] (0x4000276160) (0x400044f5e0) Stream removed, broadcasting: 3\nI0819 14:54:48.267488 2143 log.go:181] (0x4000276160) (0x400019d2c0) Stream removed, broadcasting: 5\n" Aug 19 14:54:48.278: INFO: stdout: "iptables" Aug 19 14:54:48.279: INFO: proxyMode: iptables Aug 19 14:54:48.287: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:54:48.816: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:54:50.817: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:54:50.833: INFO: Pod kube-proxy-mode-detector still exists Aug 19 14:54:52.817: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 19 14:54:52.821: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6141 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6141 I0819 14:54:52.948619 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6141, replica count: 3 I0819 14:54:55.999757 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:54:59.000293 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 14:55:02.000902 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 14:55:02.017: INFO: Creating new exec pod Aug 19 14:55:11.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c nc -zv -t -w 2 
affinity-nodeport-timeout 80' Aug 19 14:55:12.923: INFO: stderr: "I0819 14:55:12.830576 2163 log.go:181] (0x4000144370) (0x40006ba000) Create stream\nI0819 14:55:12.832921 2163 log.go:181] (0x4000144370) (0x40006ba000) Stream added, broadcasting: 1\nI0819 14:55:12.843536 2163 log.go:181] (0x4000144370) Reply frame received for 1\nI0819 14:55:12.844718 2163 log.go:181] (0x4000144370) (0x4000998000) Create stream\nI0819 14:55:12.844900 2163 log.go:181] (0x4000144370) (0x4000998000) Stream added, broadcasting: 3\nI0819 14:55:12.846599 2163 log.go:181] (0x4000144370) Reply frame received for 3\nI0819 14:55:12.847036 2163 log.go:181] (0x4000144370) (0x40008a8960) Create stream\nI0819 14:55:12.847143 2163 log.go:181] (0x4000144370) (0x40008a8960) Stream added, broadcasting: 5\nI0819 14:55:12.848416 2163 log.go:181] (0x4000144370) Reply frame received for 5\nI0819 14:55:12.907244 2163 log.go:181] (0x4000144370) Data frame received for 5\nI0819 14:55:12.907682 2163 log.go:181] (0x4000144370) Data frame received for 3\nI0819 14:55:12.907798 2163 log.go:181] (0x4000998000) (3) Data frame handling\nI0819 14:55:12.907907 2163 log.go:181] (0x40008a8960) (5) Data frame handling\nI0819 14:55:12.908117 2163 log.go:181] (0x4000144370) Data frame received for 1\nI0819 14:55:12.908189 2163 log.go:181] (0x40006ba000) (1) Data frame handling\nI0819 14:55:12.908825 2163 log.go:181] (0x40008a8960) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0819 14:55:12.909451 2163 log.go:181] (0x40006ba000) (1) Data frame sent\nI0819 14:55:12.910162 2163 log.go:181] (0x4000144370) Data frame received for 5\nI0819 14:55:12.910246 2163 log.go:181] (0x40008a8960) (5) Data frame handling\nI0819 14:55:12.910335 2163 log.go:181] (0x40008a8960) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0819 14:55:12.910431 2163 log.go:181] (0x4000144370) Data frame received for 5\nI0819 14:55:12.910524 2163 log.go:181] (0x40008a8960) (5) Data frame handling\nI0819 14:55:12.912511 2163 log.go:181] (0x4000144370) (0x40006ba000) Stream removed, broadcasting: 1\nI0819 14:55:12.913774 2163 log.go:181] (0x4000144370) Go away received\nI0819 14:55:12.915931 2163 log.go:181] (0x4000144370) (0x40006ba000) Stream removed, broadcasting: 1\nI0819 14:55:12.916335 2163 log.go:181] (0x4000144370) (0x4000998000) Stream removed, broadcasting: 3\nI0819 14:55:12.916521 2163 log.go:181] (0x4000144370) (0x40008a8960) Stream removed, broadcasting: 5\n" Aug 19 14:55:12.924: INFO: stdout: "" Aug 19 14:55:12.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c nc -zv -t -w 2 10.102.137.69 80' Aug 19 14:55:14.512: INFO: stderr: "I0819 14:55:14.425858 2183 log.go:181] (0x4000d0d3f0) (0x4000d04500) Create stream\nI0819 14:55:14.430735 2183 log.go:181] (0x4000d0d3f0) (0x4000d04500) Stream added, broadcasting: 1\nI0819 14:55:14.447964 2183 log.go:181] (0x4000d0d3f0) Reply frame received for 1\nI0819 14:55:14.448528 2183 log.go:181] (0x4000d0d3f0) (0x4000ae6000) Create stream\nI0819 14:55:14.448589 2183 log.go:181] (0x4000d0d3f0) (0x4000ae6000) Stream added, broadcasting: 3\nI0819 14:55:14.449738 2183 log.go:181] (0x4000d0d3f0) Reply frame received for 3\nI0819 14:55:14.449965 2183 log.go:181] (0x4000d0d3f0) (0x4000ae60a0) Create stream\nI0819 14:55:14.450031 2183 log.go:181] (0x4000d0d3f0) (0x4000ae60a0) Stream added, broadcasting: 5\nI0819 14:55:14.450951 2183 log.go:181] 
(0x4000d0d3f0) Reply frame received for 5\nI0819 14:55:14.496535 2183 log.go:181] (0x4000d0d3f0) Data frame received for 1\nI0819 14:55:14.496922 2183 log.go:181] (0x4000d0d3f0) Data frame received for 3\nI0819 14:55:14.497002 2183 log.go:181] (0x4000ae6000) (3) Data frame handling\nI0819 14:55:14.497148 2183 log.go:181] (0x4000d0d3f0) Data frame received for 5\nI0819 14:55:14.497204 2183 log.go:181] (0x4000ae60a0) (5) Data frame handling\nI0819 14:55:14.497334 2183 log.go:181] (0x4000d04500) (1) Data frame handling\n+ nc -zv -t -w 2 10.102.137.69 80\nConnection to 10.102.137.69 80 port [tcp/http] succeeded!\nI0819 14:55:14.498577 2183 log.go:181] (0x4000ae60a0) (5) Data frame sent\nI0819 14:55:14.498800 2183 log.go:181] (0x4000d04500) (1) Data frame sent\nI0819 14:55:14.499225 2183 log.go:181] (0x4000d0d3f0) Data frame received for 5\nI0819 14:55:14.499278 2183 log.go:181] (0x4000ae60a0) (5) Data frame handling\nI0819 14:55:14.501053 2183 log.go:181] (0x4000d0d3f0) (0x4000d04500) Stream removed, broadcasting: 1\nI0819 14:55:14.501679 2183 log.go:181] (0x4000d0d3f0) Go away received\nI0819 14:55:14.503752 2183 log.go:181] (0x4000d0d3f0) (0x4000d04500) Stream removed, broadcasting: 1\nI0819 14:55:14.503974 2183 log.go:181] (0x4000d0d3f0) (0x4000ae6000) Stream removed, broadcasting: 3\nI0819 14:55:14.504117 2183 log.go:181] (0x4000d0d3f0) (0x4000ae60a0) Stream removed, broadcasting: 5\n" Aug 19 14:55:14.513: INFO: stdout: "" Aug 19 14:55:14.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31508' Aug 19 14:55:16.107: INFO: stderr: "I0819 14:55:16.007778 2203 log.go:181] (0x40001ba370) (0x4000a6c140) Create stream\nI0819 14:55:16.011549 2203 log.go:181] (0x40001ba370) (0x4000a6c140) Stream added, broadcasting: 1\nI0819 14:55:16.020521 2203 log.go:181] (0x40001ba370) Reply frame received for 1\nI0819 14:55:16.021082 2203 log.go:181] (0x40001ba370) (0x4000850000) Create stream\nI0819 14:55:16.021142 2203 log.go:181] (0x40001ba370) (0x4000850000) Stream added, broadcasting: 3\nI0819 14:55:16.022530 2203 log.go:181] (0x40001ba370) Reply frame received for 3\nI0819 14:55:16.022980 2203 log.go:181] (0x40001ba370) (0x40000ec1e0) Create stream\nI0819 14:55:16.023095 2203 log.go:181] (0x40001ba370) (0x40000ec1e0) Stream added, broadcasting: 5\nI0819 14:55:16.024591 2203 log.go:181] (0x40001ba370) Reply frame received for 5\nI0819 14:55:16.087915 2203 log.go:181] (0x40001ba370) Data frame received for 3\nI0819 14:55:16.088179 2203 log.go:181] (0x40001ba370) Data frame received for 1\nI0819 14:55:16.088847 2203 log.go:181] (0x40001ba370) Data frame received for 5\nI0819 14:55:16.089072 2203 log.go:181] (0x40000ec1e0) (5) Data frame handling\nI0819 14:55:16.089301 2203 log.go:181] (0x4000a6c140) (1) Data frame handling\nI0819 14:55:16.089771 2203 log.go:181] (0x4000850000) (3) Data frame handling\nI0819 14:55:16.091974 2203 log.go:181] (0x40000ec1e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 31508\nConnection to 172.18.0.11 31508 port [tcp/31508] succeeded!\nI0819 14:55:16.092623 2203 log.go:181] (0x40001ba370) Data frame received for 5\nI0819 14:55:16.092840 2203 log.go:181] (0x40000ec1e0) (5) Data frame handling\nI0819 14:55:16.093571 2203 log.go:181] (0x4000a6c140) (1) Data frame sent\nI0819 14:55:16.094493 2203 log.go:181] (0x40001ba370) (0x4000a6c140) Stream removed, broadcasting: 1\nI0819 14:55:16.095934 2203 log.go:181] (0x40001ba370) 
Go away received\nI0819 14:55:16.098899 2203 log.go:181] (0x40001ba370) (0x4000a6c140) Stream removed, broadcasting: 1\nI0819 14:55:16.099285 2203 log.go:181] (0x40001ba370) (0x4000850000) Stream removed, broadcasting: 3\nI0819 14:55:16.099809 2203 log.go:181] (0x40001ba370) (0x40000ec1e0) Stream removed, broadcasting: 5\n" Aug 19 14:55:16.108: INFO: stdout: "" Aug 19 14:55:16.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31508' Aug 19 14:55:18.161: INFO: stderr: "I0819 14:55:18.058280 2224 log.go:181] (0x40006abad0) (0x40009b4820) Create stream\nI0819 14:55:18.060645 2224 log.go:181] (0x40006abad0) (0x40009b4820) Stream added, broadcasting: 1\nI0819 14:55:18.080122 2224 log.go:181] (0x40006abad0) Reply frame received for 1\nI0819 14:55:18.080632 2224 log.go:181] (0x40006abad0) (0x40006a2000) Create stream\nI0819 14:55:18.080690 2224 log.go:181] (0x40006abad0) (0x40006a2000) Stream added, broadcasting: 3\nI0819 14:55:18.081853 2224 log.go:181] (0x40006abad0) Reply frame received for 3\nI0819 14:55:18.082133 2224 log.go:181] (0x40006abad0) (0x40006a20a0) Create stream\nI0819 14:55:18.082196 2224 log.go:181] (0x40006abad0) (0x40006a20a0) Stream added, broadcasting: 5\nI0819 14:55:18.083291 2224 log.go:181] (0x40006abad0) Reply frame received for 5\nI0819 14:55:18.140007 2224 log.go:181] (0x40006abad0) Data frame received for 3\nI0819 14:55:18.140691 2224 log.go:181] (0x40006abad0) Data frame received for 5\nI0819 14:55:18.140921 2224 log.go:181] (0x40006a20a0) (5) Data frame handling\nI0819 14:55:18.141049 2224 log.go:181] (0x40006a2000) (3) Data frame handling\nI0819 14:55:18.142217 2224 log.go:181] (0x40006a20a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31508\nConnection to 172.18.0.14 31508 port [tcp/31508] succeeded!\nI0819 14:55:18.142952 2224 log.go:181] (0x40006abad0) Data frame received for 5\nI0819 14:55:18.143060 2224 log.go:181] (0x40006a20a0) (5) Data frame handling\nI0819 14:55:18.143765 2224 log.go:181] (0x40006abad0) Data frame received for 1\nI0819 14:55:18.143867 2224 log.go:181] (0x40009b4820) (1) Data frame handling\nI0819 14:55:18.143987 2224 log.go:181] (0x40009b4820) (1) Data frame sent\nI0819 14:55:18.145564 2224 log.go:181] (0x40006abad0) (0x40009b4820) Stream removed, broadcasting: 1\nI0819 14:55:18.147476 2224 log.go:181] (0x40006abad0) Go away received\nI0819 14:55:18.150197 2224 log.go:181] (0x40006abad0) (0x40009b4820) Stream removed, broadcasting: 1\nI0819 14:55:18.150478 2224 log.go:181] (0x40006abad0) (0x40006a2000) Stream removed, broadcasting: 3\nI0819 14:55:18.150661 2224 log.go:181] (0x40006abad0) (0x40006a20a0) Stream removed, broadcasting: 5\n" Aug 19 14:55:18.162: INFO: stdout: "" Aug 19 14:55:18.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31508/ ; done' Aug 19 14:55:20.286: INFO: stderr: "I0819 14:55:20.060935 2244 log.go:181] (0x400003a0b0) (0x400098a000) Create stream\nI0819 14:55:20.065565 2244 log.go:181] (0x400003a0b0) (0x400098a000) Stream added, broadcasting: 1\nI0819 14:55:20.081265 2244 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0819 14:55:20.082548 2244 log.go:181] (0x400003a0b0) (0x400098a140) Create stream\nI0819 14:55:20.082682 2244 log.go:181] 
(0x400003a0b0) (0x400098a140) Stream added, broadcasting: 3\nI0819 14:55:20.084214 2244 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0819 14:55:20.084474 2244 log.go:181] (0x400003a0b0) (0x4000f0c000) Create stream\nI0819 14:55:20.084545 2244 log.go:181] (0x400003a0b0) (0x4000f0c000) Stream added, broadcasting: 5\nI0819 14:55:20.086095 2244 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0819 14:55:20.166281 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.166742 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.166873 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.166986 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.167730 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.169098 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.170133 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.170236 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.170385 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.170694 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.170812 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.170922 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.171014 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.171105 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.171208 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.177288 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.177402 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.177567 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.177999 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.178136 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.178228 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.178348 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.178420 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.178655 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.183054 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.183210 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.183362 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.183843 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.183937 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.184012 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.184079 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.184187 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.184289 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.187614 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.187713 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.188250 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.188927 2244 log.go:181] 
(0x400003a0b0) Data frame received for 3\nI0819 14:55:20.189214 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.189302 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.189386 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.189469 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.189566 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.197597 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.197703 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.197846 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.200559 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.200638 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.200800 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.200917 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.201009 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.201078 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.210585 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.210646 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.210705 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.211221 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.211304 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.211374 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.211465 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.211548 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.211635 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.216021 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.216100 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.216201 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.216338 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.216393 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.216445 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.216496 2244 log.go:181] (0x400003a0b0) Data frame received for 5\n+ echo\nI0819 14:55:20.216548 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.216599 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.216685 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.216852 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.216929 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.220633 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.220822 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.220944 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.221378 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.221447 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.221527 2244 log.go:181] 
(0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.221641 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.221713 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.221794 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.226119 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.226248 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.226349 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.226476 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.226549 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.226619 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.226679 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.226725 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.226780 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.231578 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.231704 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.231850 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.232135 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.232267 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.232402 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.232530 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.232671 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.232876 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.235524 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.235642 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.235777 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.236632 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.236888 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0819 14:55:20.237039 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.237174 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.237327 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.237445 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.237518 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.237575 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.237660 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n 2 http://172.18.0.11:31508/\nI0819 14:55:20.241253 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.241380 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.241511 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.241765 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.241838 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.241937 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.242051 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.242209 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.242352 2244 
log.go:181] (0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.245410 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.245507 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.245636 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.245987 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.246073 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.246148 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.246232 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.246296 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.246355 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.250078 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.250159 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.250284 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.250737 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.250810 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.250884 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:20.250950 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.251039 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.251129 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.255584 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.255731 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.255917 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.256030 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.256133 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.256228 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0819 14:55:20.256312 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.256418 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.256563 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.256861 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\n http://172.18.0.11:31508/\nI0819 14:55:20.257055 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.257198 2244 log.go:181] (0x4000f0c000) (5) Data frame sent\nI0819 14:55:20.262178 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.262297 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.262483 2244 log.go:181] (0x400098a140) (3) Data frame sent\nI0819 14:55:20.263075 2244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:55:20.263241 2244 log.go:181] (0x400098a140) (3) Data frame handling\nI0819 14:55:20.263510 2244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:55:20.263636 2244 log.go:181] (0x4000f0c000) (5) Data frame handling\nI0819 14:55:20.273229 2244 log.go:181] (0x400003a0b0) Data frame received for 1\nI0819 14:55:20.273368 2244 log.go:181] (0x400098a000) (1) Data frame handling\nI0819 14:55:20.273457 2244 log.go:181] (0x400098a000) (1) Data frame sent\nI0819 14:55:20.273855 2244 log.go:181] (0x400003a0b0) (0x400098a000) Stream removed, broadcasting: 1\nI0819 14:55:20.276050 2244 log.go:181] (0x400003a0b0) Go 
away received\nI0819 14:55:20.279389 2244 log.go:181] (0x400003a0b0) (0x400098a000) Stream removed, broadcasting: 1\nI0819 14:55:20.279652 2244 log.go:181] (0x400003a0b0) (0x400098a140) Stream removed, broadcasting: 3\nI0819 14:55:20.279822 2244 log.go:181] (0x400003a0b0) (0x4000f0c000) Stream removed, broadcasting: 5\n" Aug 19 14:55:20.291: INFO: stdout: "\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8\naffinity-nodeport-timeout-9z7x8" Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.291: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Received response from host: affinity-nodeport-timeout-9z7x8 Aug 19 14:55:20.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31508/' Aug 19 14:55:22.302: INFO: stderr: "I0819 14:55:22.202208 2264 log.go:181] (0x40001d91e0) (0x4000720460) Create stream\nI0819 14:55:22.205240 2264 log.go:181] (0x40001d91e0) (0x4000720460) Stream added, broadcasting: 1\nI0819 14:55:22.215982 2264 log.go:181] (0x40001d91e0) Reply frame received for 1\nI0819 14:55:22.217082 2264 log.go:181] (0x40001d91e0) (0x4000898000) Create stream\nI0819 14:55:22.217182 2264 log.go:181] (0x40001d91e0) (0x4000898000) Stream added, broadcasting: 3\nI0819 14:55:22.219440 2264 log.go:181] (0x40001d91e0) Reply frame received for 3\nI0819 14:55:22.219885 2264 log.go:181] (0x40001d91e0) (0x4000720500) Create stream\nI0819 14:55:22.219975 2264 log.go:181] (0x40001d91e0) (0x4000720500) Stream added, broadcasting: 5\nI0819 14:55:22.221496 2264 log.go:181] (0x40001d91e0) Reply frame received for 5\nI0819 14:55:22.282627 2264 log.go:181] (0x40001d91e0) Data frame received for 5\nI0819 14:55:22.283005 2264 log.go:181] (0x4000720500) (5) Data frame handling\nI0819 
14:55:22.283636 2264 log.go:181] (0x40001d91e0) Data frame received for 3\nI0819 14:55:22.283728 2264 log.go:181] (0x4000898000) (3) Data frame handling\nI0819 14:55:22.283822 2264 log.go:181] (0x4000720500) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:22.285218 2264 log.go:181] (0x4000898000) (3) Data frame sent\nI0819 14:55:22.285395 2264 log.go:181] (0x40001d91e0) Data frame received for 3\nI0819 14:55:22.285509 2264 log.go:181] (0x40001d91e0) Data frame received for 5\nI0819 14:55:22.285637 2264 log.go:181] (0x4000720500) (5) Data frame handling\nI0819 14:55:22.285746 2264 log.go:181] (0x4000898000) (3) Data frame handling\nI0819 14:55:22.286210 2264 log.go:181] (0x40001d91e0) Data frame received for 1\nI0819 14:55:22.286293 2264 log.go:181] (0x4000720460) (1) Data frame handling\nI0819 14:55:22.286366 2264 log.go:181] (0x4000720460) (1) Data frame sent\nI0819 14:55:22.287969 2264 log.go:181] (0x40001d91e0) (0x4000720460) Stream removed, broadcasting: 1\nI0819 14:55:22.289829 2264 log.go:181] (0x40001d91e0) Go away received\nI0819 14:55:22.294207 2264 log.go:181] (0x40001d91e0) (0x4000720460) Stream removed, broadcasting: 1\nI0819 14:55:22.294585 2264 log.go:181] (0x40001d91e0) (0x4000898000) Stream removed, broadcasting: 3\nI0819 14:55:22.294845 2264 log.go:181] (0x40001d91e0) (0x4000720500) Stream removed, broadcasting: 5\n" Aug 19 14:55:22.303: INFO: stdout: "affinity-nodeport-timeout-9z7x8" Aug 19 14:55:37.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31508/' Aug 19 14:55:38.865: INFO: stderr: "I0819 14:55:38.740365 2284 log.go:181] (0x40000c31e0) (0x400069c3c0) Create stream\nI0819 14:55:38.743227 2284 log.go:181] (0x40000c31e0) (0x400069c3c0) Stream added, broadcasting: 1\nI0819 14:55:38.752454 2284 log.go:181] (0x40000c31e0) Reply frame received for 1\nI0819 14:55:38.753398 2284 log.go:181] (0x40000c31e0) (0x400069c460) Create stream\nI0819 14:55:38.753475 2284 log.go:181] (0x40000c31e0) (0x400069c460) Stream added, broadcasting: 3\nI0819 14:55:38.754609 2284 log.go:181] (0x40000c31e0) Reply frame received for 3\nI0819 14:55:38.754870 2284 log.go:181] (0x40000c31e0) (0x400069c500) Create stream\nI0819 14:55:38.754925 2284 log.go:181] (0x40000c31e0) (0x400069c500) Stream added, broadcasting: 5\nI0819 14:55:38.756227 2284 log.go:181] (0x40000c31e0) Reply frame received for 5\nI0819 14:55:38.831915 2284 log.go:181] (0x40000c31e0) Data frame received for 5\nI0819 14:55:38.832319 2284 log.go:181] (0x400069c500) (5) Data frame handling\nI0819 14:55:38.833653 2284 log.go:181] (0x400069c500) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:38.835212 2284 log.go:181] (0x40000c31e0) Data frame received for 3\nI0819 14:55:38.835416 2284 log.go:181] (0x400069c460) (3) Data frame handling\nI0819 14:55:38.835561 2284 log.go:181] (0x400069c460) (3) Data frame sent\nI0819 14:55:38.835803 2284 log.go:181] (0x40000c31e0) Data frame received for 3\nI0819 14:55:38.835936 2284 log.go:181] (0x400069c460) (3) Data frame handling\nI0819 14:55:38.836130 2284 log.go:181] (0x40000c31e0) Data frame received for 5\nI0819 14:55:38.836218 2284 log.go:181] (0x400069c500) (5) Data frame handling\nI0819 14:55:38.837006 2284 log.go:181] (0x40000c31e0) Data frame received for 1\nI0819 14:55:38.837131 2284 log.go:181] (0x400069c3c0) (1) Data 
frame handling\nI0819 14:55:38.837265 2284 log.go:181] (0x400069c3c0) (1) Data frame sent\nI0819 14:55:38.838649 2284 log.go:181] (0x40000c31e0) (0x400069c3c0) Stream removed, broadcasting: 1\nI0819 14:55:38.840346 2284 log.go:181] (0x40000c31e0) Go away received\nI0819 14:55:38.857237 2284 log.go:181] (0x40000c31e0) (0x400069c3c0) Stream removed, broadcasting: 1\nI0819 14:55:38.857528 2284 log.go:181] (0x40000c31e0) (0x400069c460) Stream removed, broadcasting: 3\nI0819 14:55:38.857710 2284 log.go:181] (0x40000c31e0) (0x400069c500) Stream removed, broadcasting: 5\n" Aug 19 14:55:38.866: INFO: stdout: "affinity-nodeport-timeout-9z7x8" Aug 19 14:55:53.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6141 execpod-affinityqhlts -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:31508/' Aug 19 14:55:55.346: INFO: stderr: "I0819 14:55:55.222750 2304 log.go:181] (0x400054a420) (0x4000574500) Create stream\nI0819 14:55:55.228553 2304 log.go:181] (0x400054a420) (0x4000574500) Stream added, broadcasting: 1\nI0819 14:55:55.242738 2304 log.go:181] (0x400054a420) Reply frame received for 1\nI0819 14:55:55.243593 2304 log.go:181] (0x400054a420) (0x4000b22280) Create stream\nI0819 14:55:55.243639 2304 log.go:181] (0x400054a420) (0x4000b22280) Stream added, broadcasting: 3\nI0819 14:55:55.245074 2304 log.go:181] (0x400054a420) Reply frame received for 3\nI0819 14:55:55.245409 2304 log.go:181] (0x400054a420) (0x40006e20a0) Create stream\nI0819 14:55:55.245483 2304 log.go:181] (0x400054a420) (0x40006e20a0) Stream added, broadcasting: 5\nI0819 14:55:55.246741 2304 log.go:181] (0x400054a420) Reply frame received for 5\nI0819 14:55:55.327675 2304 log.go:181] (0x400054a420) Data frame received for 5\nI0819 14:55:55.327970 2304 log.go:181] (0x40006e20a0) (5) Data frame handling\nI0819 14:55:55.328704 2304 log.go:181] (0x40006e20a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31508/\nI0819 14:55:55.331300 2304 log.go:181] (0x400054a420) Data frame received for 3\nI0819 14:55:55.331343 2304 log.go:181] (0x4000b22280) (3) Data frame handling\nI0819 14:55:55.331388 2304 log.go:181] (0x4000b22280) (3) Data frame sent\nI0819 14:55:55.332034 2304 log.go:181] (0x400054a420) Data frame received for 5\nI0819 14:55:55.332242 2304 log.go:181] (0x40006e20a0) (5) Data frame handling\nI0819 14:55:55.332486 2304 log.go:181] (0x400054a420) Data frame received for 3\nI0819 14:55:55.332615 2304 log.go:181] (0x4000b22280) (3) Data frame handling\nI0819 14:55:55.333182 2304 log.go:181] (0x400054a420) Data frame received for 1\nI0819 14:55:55.333300 2304 log.go:181] (0x4000574500) (1) Data frame handling\nI0819 14:55:55.333408 2304 log.go:181] (0x4000574500) (1) Data frame sent\nI0819 14:55:55.334474 2304 log.go:181] (0x400054a420) (0x4000574500) Stream removed, broadcasting: 1\nI0819 14:55:55.337033 2304 log.go:181] (0x400054a420) Go away received\nI0819 14:55:55.339006 2304 log.go:181] (0x400054a420) (0x4000574500) Stream removed, broadcasting: 1\nI0819 14:55:55.339169 2304 log.go:181] (0x400054a420) (0x4000b22280) Stream removed, broadcasting: 3\nI0819 14:55:55.339286 2304 log.go:181] (0x400054a420) (0x40006e20a0) Stream removed, broadcasting: 5\n" Aug 19 14:55:55.347: INFO: stdout: "affinity-nodeport-timeout-qx42l" Aug 19 14:55:55.347: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6141, will wait for the garbage collector to 
delete the pods Aug 19 14:55:55.437: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.316441ms Aug 19 14:55:55.939: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 501.676544ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:56:10.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6141" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:93.823 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":148,"skipped":2189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:56:10.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 19 14:56:10.281: INFO: starting watch STEP: patching STEP: updating Aug 19 14:56:10.294: INFO: waiting for watch events with expected annotations Aug 19 14:56:10.295: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:56:10.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5589" for this suite. 
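------------------------------
The IngressClass steps above (create, get, list, watch, patch, update, delete, delete collection) map onto ordinary kubectl verbs against the networking.k8s.io/v1 API. A minimal sketch; the object name and controller string are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
spec:
  controller: example.com/ingress-controller
EOF
kubectl get ingressclass example-class -o yaml            # get
kubectl get ingressclasses                                # list
kubectl patch ingressclass example-class --type=merge \
  -p '{"metadata":{"annotations":{"patched":"true"}}}'    # patch
kubectl delete ingressclass example-class                 # delete
------------------------------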
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":149,"skipped":2217,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:56:10.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 14:56:10.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270" in namespace "downward-api-700" to be "Succeeded or Failed" Aug 19 14:56:10.446: INFO: Pod "downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270": Phase="Pending", Reason="", readiness=false. Elapsed: 16.772096ms Aug 19 14:56:12.454: INFO: Pod "downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024166448s Aug 19 14:56:14.622: INFO: Pod "downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270": Phase="Running", Reason="", readiness=true. Elapsed: 4.192315389s Aug 19 14:56:16.659: INFO: Pod "downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.229336124s STEP: Saw pod success Aug 19 14:56:16.659: INFO: Pod "downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270" satisfied condition "Succeeded or Failed" Aug 19 14:56:16.664: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270 container client-container: STEP: delete the pod Aug 19 14:56:16.871: INFO: Waiting for pod downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270 to disappear Aug 19 14:56:16.877: INFO: Pod downwardapi-volume-975f0429-3248-45eb-be37-4a7fc4954270 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:56:16.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-700" for this suite. 
• [SLOW TEST:6.531 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:56:16.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Aug 19 14:56:17.022: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix232935885/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:56:18.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8277" for this suite. 
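------------------------------
The proxy case above can be tried by hand: kubectl proxy accepts a Unix socket in place of a TCP port, and the API then answers over that socket. A minimal sketch; the socket path is illustrative:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1
# curl speaks HTTP over the socket; the hostname in the URL is ignored.
# Expect the /api/ discovery document, e.g. {"kind":"APIVersions",...}.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill "$PROXY_PID"
------------------------------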
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":151,"skipped":2290,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:56:18.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8956 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 19 14:56:18.369: INFO: Found 0 stateful pods, waiting for 3 Aug 19 14:56:28.376: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:56:28.376: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:56:28.376: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 19 14:56:38.378: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:56:38.378: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:56:38.378: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 19 14:56:38.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8956 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 14:56:40.143: INFO: stderr: "I0819 14:56:39.958705 2344 log.go:181] (0x40000ce0b0) (0x4000b86000) Create stream\nI0819 14:56:39.961953 2344 log.go:181] (0x40000ce0b0) (0x4000b86000) Stream added, broadcasting: 1\nI0819 14:56:39.973479 2344 log.go:181] (0x40000ce0b0) Reply frame received for 1\nI0819 14:56:39.974293 2344 log.go:181] (0x40000ce0b0) (0x4000b860a0) Create stream\nI0819 14:56:39.974365 2344 log.go:181] (0x40000ce0b0) (0x4000b860a0) Stream added, broadcasting: 3\nI0819 14:56:39.975734 2344 log.go:181] (0x40000ce0b0) Reply frame received for 3\nI0819 14:56:39.975932 2344 log.go:181] (0x40000ce0b0) (0x4000e0e000) Create stream\nI0819 14:56:39.975983 2344 log.go:181] (0x40000ce0b0) (0x4000e0e000) Stream added, broadcasting: 5\nI0819 14:56:39.977208 2344 log.go:181] (0x40000ce0b0) Reply frame received for 5\nI0819 14:56:40.056416 2344 log.go:181] (0x40000ce0b0) Data frame received for 
5\nI0819 14:56:40.056827 2344 log.go:181] (0x4000e0e000) (5) Data frame handling\nI0819 14:56:40.057626 2344 log.go:181] (0x4000e0e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 14:56:40.121081 2344 log.go:181] (0x40000ce0b0) Data frame received for 3\nI0819 14:56:40.121322 2344 log.go:181] (0x4000b860a0) (3) Data frame handling\nI0819 14:56:40.121492 2344 log.go:181] (0x40000ce0b0) Data frame received for 5\nI0819 14:56:40.121686 2344 log.go:181] (0x4000e0e000) (5) Data frame handling\nI0819 14:56:40.121958 2344 log.go:181] (0x4000b860a0) (3) Data frame sent\nI0819 14:56:40.122102 2344 log.go:181] (0x40000ce0b0) Data frame received for 3\nI0819 14:56:40.122182 2344 log.go:181] (0x4000b860a0) (3) Data frame handling\nI0819 14:56:40.122396 2344 log.go:181] (0x40000ce0b0) Data frame received for 1\nI0819 14:56:40.122484 2344 log.go:181] (0x4000b86000) (1) Data frame handling\nI0819 14:56:40.122568 2344 log.go:181] (0x4000b86000) (1) Data frame sent\nI0819 14:56:40.126904 2344 log.go:181] (0x40000ce0b0) (0x4000b86000) Stream removed, broadcasting: 1\nI0819 14:56:40.129720 2344 log.go:181] (0x40000ce0b0) Go away received\nI0819 14:56:40.132113 2344 log.go:181] (0x40000ce0b0) (0x4000b86000) Stream removed, broadcasting: 1\nI0819 14:56:40.132699 2344 log.go:181] (0x40000ce0b0) (0x4000b860a0) Stream removed, broadcasting: 3\nI0819 14:56:40.133011 2344 log.go:181] (0x40000ce0b0) (0x4000e0e000) Stream removed, broadcasting: 5\n" Aug 19 14:56:40.145: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 14:56:40.145: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 19 14:56:50.194: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 19 14:57:00.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8956 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:57:01.935: INFO: stderr: "I0819 14:57:01.850759 2364 log.go:181] (0x400003a0b0) (0x4000248000) Create stream\nI0819 14:57:01.854756 2364 log.go:181] (0x400003a0b0) (0x4000248000) Stream added, broadcasting: 1\nI0819 14:57:01.865291 2364 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0819 14:57:01.865906 2364 log.go:181] (0x400003a0b0) (0x4000d8e000) Create stream\nI0819 14:57:01.865976 2364 log.go:181] (0x400003a0b0) (0x4000d8e000) Stream added, broadcasting: 3\nI0819 14:57:01.867205 2364 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0819 14:57:01.867444 2364 log.go:181] (0x400003a0b0) (0x40005b0f00) Create stream\nI0819 14:57:01.867503 2364 log.go:181] (0x400003a0b0) (0x40005b0f00) Stream added, broadcasting: 5\nI0819 14:57:01.868524 2364 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0819 14:57:01.911515 2364 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:57:01.911916 2364 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:57:01.912165 2364 log.go:181] (0x4000d8e000) (3) Data frame handling\nI0819 14:57:01.912294 2364 log.go:181] (0x400003a0b0) Data frame received for 1\nI0819 14:57:01.912408 2364 log.go:181] (0x4000248000) (1) Data frame handling\nI0819 14:57:01.912581 2364 log.go:181] (0x40005b0f00) (5) Data frame 
handling\nI0819 14:57:01.914141 2364 log.go:181] (0x4000248000) (1) Data frame sent\nI0819 14:57:01.914454 2364 log.go:181] (0x40005b0f00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0819 14:57:01.914839 2364 log.go:181] (0x400003a0b0) Data frame received for 5\nI0819 14:57:01.914944 2364 log.go:181] (0x40005b0f00) (5) Data frame handling\nI0819 14:57:01.915185 2364 log.go:181] (0x4000d8e000) (3) Data frame sent\nI0819 14:57:01.915300 2364 log.go:181] (0x400003a0b0) Data frame received for 3\nI0819 14:57:01.917311 2364 log.go:181] (0x400003a0b0) (0x4000248000) Stream removed, broadcasting: 1\nI0819 14:57:01.918285 2364 log.go:181] (0x4000d8e000) (3) Data frame handling\nI0819 14:57:01.918663 2364 log.go:181] (0x400003a0b0) Go away received\nI0819 14:57:01.922500 2364 log.go:181] (0x400003a0b0) (0x4000248000) Stream removed, broadcasting: 1\nI0819 14:57:01.922755 2364 log.go:181] (0x400003a0b0) (0x4000d8e000) Stream removed, broadcasting: 3\nI0819 14:57:01.922920 2364 log.go:181] (0x400003a0b0) (0x40005b0f00) Stream removed, broadcasting: 5\n" Aug 19 14:57:01.935: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 14:57:01.935: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 14:57:12.130: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update Aug 19 14:57:12.131: INFO: Waiting for Pod statefulset-8956/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 19 14:57:12.131: INFO: Waiting for Pod statefulset-8956/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 19 14:57:22.238: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update Aug 19 14:57:22.238: INFO: Waiting for Pod statefulset-8956/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 19 14:57:32.561: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update STEP: Rolling back to a previous revision Aug 19 14:57:42.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8956 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 14:57:43.868: INFO: stderr: "I0819 14:57:43.705835 2384 log.go:181] (0x400003a420) (0x4000386000) Create stream\nI0819 14:57:43.711796 2384 log.go:181] (0x400003a420) (0x4000386000) Stream added, broadcasting: 1\nI0819 14:57:43.737221 2384 log.go:181] (0x400003a420) Reply frame received for 1\nI0819 14:57:43.738009 2384 log.go:181] (0x400003a420) (0x40003860a0) Create stream\nI0819 14:57:43.738090 2384 log.go:181] (0x400003a420) (0x40003860a0) Stream added, broadcasting: 3\nI0819 14:57:43.739482 2384 log.go:181] (0x400003a420) Reply frame received for 3\nI0819 14:57:43.739695 2384 log.go:181] (0x400003a420) (0x4000bae280) Create stream\nI0819 14:57:43.739748 2384 log.go:181] (0x400003a420) (0x4000bae280) Stream added, broadcasting: 5\nI0819 14:57:43.740794 2384 log.go:181] (0x400003a420) Reply frame received for 5\nI0819 14:57:43.816614 2384 log.go:181] (0x400003a420) Data frame received for 5\nI0819 14:57:43.817122 2384 log.go:181] (0x4000bae280) (5) Data frame handling\nI0819 14:57:43.818034 2384 log.go:181] (0x4000bae280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 14:57:43.848531 2384 log.go:181] (0x400003a420) Data frame received for 3\nI0819 14:57:43.848670 2384 log.go:181] (0x400003a420) 
Data frame received for 5\nI0819 14:57:43.848991 2384 log.go:181] (0x4000bae280) (5) Data frame handling\nI0819 14:57:43.849330 2384 log.go:181] (0x40003860a0) (3) Data frame handling\nI0819 14:57:43.849551 2384 log.go:181] (0x40003860a0) (3) Data frame sent\nI0819 14:57:43.849704 2384 log.go:181] (0x400003a420) Data frame received for 3\nI0819 14:57:43.849851 2384 log.go:181] (0x40003860a0) (3) Data frame handling\nI0819 14:57:43.850175 2384 log.go:181] (0x400003a420) Data frame received for 1\nI0819 14:57:43.850232 2384 log.go:181] (0x4000386000) (1) Data frame handling\nI0819 14:57:43.850288 2384 log.go:181] (0x4000386000) (1) Data frame sent\nI0819 14:57:43.852348 2384 log.go:181] (0x400003a420) (0x4000386000) Stream removed, broadcasting: 1\nI0819 14:57:43.854946 2384 log.go:181] (0x400003a420) Go away received\nI0819 14:57:43.858684 2384 log.go:181] (0x400003a420) (0x4000386000) Stream removed, broadcasting: 1\nI0819 14:57:43.859378 2384 log.go:181] (0x400003a420) (0x40003860a0) Stream removed, broadcasting: 3\nI0819 14:57:43.860006 2384 log.go:181] (0x400003a420) (0x4000bae280) Stream removed, broadcasting: 5\n" Aug 19 14:57:43.870: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 14:57:43.870: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 14:57:53.915: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 19 14:58:04.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8956 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 14:58:06.195: INFO: stderr: "I0819 14:58:06.071907 2405 log.go:181] (0x4000e188f0) (0x4000afa140) Create stream\nI0819 14:58:06.075594 2405 log.go:181] (0x4000e188f0) (0x4000afa140) Stream added, broadcasting: 1\nI0819 14:58:06.086437 2405 log.go:181] (0x4000e188f0) Reply frame received for 1\nI0819 14:58:06.087236 2405 log.go:181] (0x4000e188f0) (0x40001d4000) Create stream\nI0819 14:58:06.087314 2405 log.go:181] (0x4000e188f0) (0x40001d4000) Stream added, broadcasting: 3\nI0819 14:58:06.088625 2405 log.go:181] (0x4000e188f0) Reply frame received for 3\nI0819 14:58:06.089043 2405 log.go:181] (0x4000e188f0) (0x4000afa1e0) Create stream\nI0819 14:58:06.089118 2405 log.go:181] (0x4000e188f0) (0x4000afa1e0) Stream added, broadcasting: 5\nI0819 14:58:06.090294 2405 log.go:181] (0x4000e188f0) Reply frame received for 5\nI0819 14:58:06.156504 2405 log.go:181] (0x4000e188f0) Data frame received for 5\nI0819 14:58:06.157051 2405 log.go:181] (0x4000afa1e0) (5) Data frame handling\nI0819 14:58:06.157891 2405 log.go:181] (0x4000afa1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0819 14:58:06.174427 2405 log.go:181] (0x4000e188f0) Data frame received for 3\nI0819 14:58:06.174610 2405 log.go:181] (0x40001d4000) (3) Data frame handling\nI0819 14:58:06.174721 2405 log.go:181] (0x40001d4000) (3) Data frame sent\nI0819 14:58:06.175089 2405 log.go:181] (0x4000e188f0) Data frame received for 3\nI0819 14:58:06.175242 2405 log.go:181] (0x40001d4000) (3) Data frame handling\nI0819 14:58:06.175397 2405 log.go:181] (0x4000e188f0) Data frame received for 5\nI0819 14:58:06.175548 2405 log.go:181] (0x4000afa1e0) (5) Data frame handling\nI0819 14:58:06.175962 2405 log.go:181] (0x4000e188f0) Data frame received for 1\nI0819 14:58:06.176114 2405 log.go:181] 
(0x4000afa140) (1) Data frame handling\nI0819 14:58:06.176272 2405 log.go:181] (0x4000afa140) (1) Data frame sent\nI0819 14:58:06.178417 2405 log.go:181] (0x4000e188f0) (0x4000afa140) Stream removed, broadcasting: 1\nI0819 14:58:06.181702 2405 log.go:181] (0x4000e188f0) Go away received\nI0819 14:58:06.183364 2405 log.go:181] (0x4000e188f0) (0x4000afa140) Stream removed, broadcasting: 1\nI0819 14:58:06.183978 2405 log.go:181] (0x4000e188f0) (0x40001d4000) Stream removed, broadcasting: 3\nI0819 14:58:06.184912 2405 log.go:181] (0x4000e188f0) (0x4000afa1e0) Stream removed, broadcasting: 5\n" Aug 19 14:58:06.195: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 14:58:06.195: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 14:58:16.479: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update Aug 19 14:58:16.479: INFO: Waiting for Pod statefulset-8956/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 19 14:58:16.479: INFO: Waiting for Pod statefulset-8956/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 19 14:58:26.494: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update Aug 19 14:58:26.495: INFO: Waiting for Pod statefulset-8956/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 19 14:58:26.495: INFO: Waiting for Pod statefulset-8956/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 19 14:58:36.495: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update Aug 19 14:58:36.496: INFO: Waiting for Pod statefulset-8956/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 19 14:58:46.494: INFO: Waiting for StatefulSet statefulset-8956/ss2 to complete update Aug 19 14:58:46.494: INFO: Waiting for Pod statefulset-8956/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 19 14:58:56.496: INFO: Deleting all statefulset in ns statefulset-8956 Aug 19 14:58:56.501: INFO: Scaling statefulset ss2 to 0 Aug 19 14:59:26.583: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 14:59:26.588: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 14:59:26.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8956" for this suite. 
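The transcript above drives one full update/rollback cycle: the template image moves from httpd:2.4.38-alpine to httpd:2.4.39-alpine, pods are replaced in reverse ordinal order, and the set is then rolled back until every pod reports the old revision (ss2-84f9d6bf57) again. Outside the test framework, the same cycle can be reproduced with kubectl's rollout commands; a sketch, assuming the StatefulSet's container is named webserver (the test's manifest is not shown in this log):

    # Trigger a rolling update by editing the pod template image
    kubectl -n statefulset-8956 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine

    # Watch pods get replaced in reverse ordinal order (ss2-2, ss2-1, ss2-0)
    kubectl -n statefulset-8956 rollout status statefulset/ss2

    # List recorded revisions, then return to the previous one
    kubectl -n statefulset-8956 rollout history statefulset/ss2
    kubectl -n statefulset-8956 rollout undo statefulset/ss2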
• [SLOW TEST:188.562 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":152,"skipped":2292,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 14:59:26.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Aug 19 14:59:27.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-7030 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 19 14:59:38.731: INFO: stderr: "" Aug 19 14:59:38.731: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Aug 19 14:59:38.731: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 19 14:59:38.731: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7030" to be "running and ready, or succeeded" Aug 19 14:59:38.783: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 51.64186ms Aug 19 14:59:40.919: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187401053s Aug 19 14:59:43.021: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289225671s Aug 19 14:59:45.042: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310738197s Aug 19 14:59:47.189: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.457463036s Aug 19 14:59:49.196: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true.
Elapsed: 10.464398937s Aug 19 14:59:49.196: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 19 14:59:49.196: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Aug 19 14:59:49.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7030' Aug 19 14:59:51.117: INFO: stderr: "" Aug 19 14:59:51.117: INFO: stdout: "I0819 14:59:45.437110 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/8n2 305\nI0819 14:59:45.637254 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/tv2 355\nI0819 14:59:45.837334 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/scj 334\nI0819 14:59:46.037360 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/mj5 414\nI0819 14:59:46.237651 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/4p9 415\nI0819 14:59:46.437295 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/mmt 593\nI0819 14:59:46.637270 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/th7g 247\nI0819 14:59:46.837300 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5bx7 573\nI0819 14:59:47.037257 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/cxf 310\nI0819 14:59:47.237258 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/n4pg 497\nI0819 14:59:47.437224 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/v566 513\nI0819 14:59:47.637251 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/tlv9 269\nI0819 14:59:47.837196 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/8dd 556\nI0819 14:59:48.037269 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/jbc 321\nI0819 14:59:48.237272 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/ncdq 266\nI0819 14:59:48.437205 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nd4g 563\nI0819 14:59:48.637270 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/dpm 357\nI0819 14:59:48.837276 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/lv9b 283\nI0819 14:59:49.037230 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/wdvq 203\nI0819 14:59:49.237308 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/zv6s 553\nI0819 14:59:49.437211 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/4cml 515\nI0819 14:59:49.637196 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/cgl 587\nI0819 14:59:49.837231 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/kmv 545\nI0819 14:59:50.037204 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/v89 350\nI0819 14:59:50.237214 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/xvcv 241\nI0819 14:59:50.437190 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/m76 423\nI0819 14:59:50.637213 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/bgvt 288\nI0819 14:59:50.837278 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/c657 577\nI0819 14:59:51.037242 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/l9g 503\n" STEP: limiting log lines Aug 19 14:59:51.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7030 --tail=1' Aug 19 14:59:52.974: INFO: stderr: "" Aug 
19 14:59:52.974: INFO: stdout: "I0819 14:59:52.837279 1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/nrn 406\n" Aug 19 14:59:52.974: INFO: got output "I0819 14:59:52.837279 1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/nrn 406\n" STEP: limiting log bytes Aug 19 14:59:52.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7030 --limit-bytes=1' Aug 19 14:59:54.468: INFO: stderr: "" Aug 19 14:59:54.468: INFO: stdout: "I" Aug 19 14:59:54.469: INFO: got output "I" STEP: exposing timestamps Aug 19 14:59:54.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7030 --tail=1 --timestamps' Aug 19 14:59:55.845: INFO: stderr: "" Aug 19 14:59:55.846: INFO: stdout: "2020-08-19T14:59:55.637337990Z I0819 14:59:55.637214 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/ns/pods/f85f 437\n" Aug 19 14:59:55.846: INFO: got output "2020-08-19T14:59:55.637337990Z I0819 14:59:55.637214 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/ns/pods/f85f 437\n" STEP: restricting to a time range Aug 19 14:59:58.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7030 --since=1s' Aug 19 14:59:59.908: INFO: stderr: "" Aug 19 14:59:59.908: INFO: stdout: "I0819 14:59:59.037253 1 logs_generator.go:76] 68 GET /api/v1/namespaces/default/pods/hgkp 467\nI0819 14:59:59.237212 1 logs_generator.go:76] 69 POST /api/v1/namespaces/default/pods/68c 416\nI0819 14:59:59.437207 1 logs_generator.go:76] 70 GET /api/v1/namespaces/kube-system/pods/jmxt 594\nI0819 14:59:59.637227 1 logs_generator.go:76] 71 GET /api/v1/namespaces/kube-system/pods/rm4 429\nI0819 14:59:59.837222 1 logs_generator.go:76] 72 POST /api/v1/namespaces/kube-system/pods/wvp 217\n" Aug 19 14:59:59.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7030 --since=24h' Aug 19 15:00:01.322: INFO: stderr: "" Aug 19 15:00:01.322: INFO: stdout: "I0819 14:59:45.437110 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/8n2 305\nI0819 14:59:45.637254 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/tv2 355\nI0819 14:59:45.837334 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/scj 334\nI0819 14:59:46.037360 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/mj5 414\nI0819 14:59:46.237651 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/4p9 415\nI0819 14:59:46.437295 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/mmt 593\nI0819 14:59:46.637270 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/th7g 247\nI0819 14:59:46.837300 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/5bx7 573\nI0819 14:59:47.037257 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/cxf 310\nI0819 14:59:47.237258 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/n4pg 497\nI0819 14:59:47.437224 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/v566 513\nI0819 14:59:47.637251 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/tlv9 269\nI0819 14:59:47.837196 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/8dd 556\nI0819 14:59:48.037269 1 logs_generator.go:76] 13 GET 
/api/v1/namespaces/default/pods/jbc 321\nI0819 14:59:48.237272 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/ncdq 266\nI0819 14:59:48.437205 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/nd4g 563\nI0819 14:59:48.637270 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/dpm 357\nI0819 14:59:48.837276 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/lv9b 283\nI0819 14:59:49.037230 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/wdvq 203\nI0819 14:59:49.237308 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/zv6s 553\nI0819 14:59:49.437211 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/4cml 515\nI0819 14:59:49.637196 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/cgl 587\nI0819 14:59:49.837231 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/kmv 545\nI0819 14:59:50.037204 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/v89 350\nI0819 14:59:50.237214 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/xvcv 241\nI0819 14:59:50.437190 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/m76 423\nI0819 14:59:50.637213 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/bgvt 288\nI0819 14:59:50.837278 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/c657 577\nI0819 14:59:51.037242 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/default/pods/l9g 503\nI0819 14:59:51.237292 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/7vw4 289\nI0819 14:59:51.437259 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/ns/pods/d86 512\nI0819 14:59:51.637223 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/cds8 561\nI0819 14:59:51.837212 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/ns/pods/tpz 270\nI0819 14:59:52.037229 1 logs_generator.go:76] 33 GET /api/v1/namespaces/kube-system/pods/5p9 525\nI0819 14:59:52.237258 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/nxfx 471\nI0819 14:59:52.437248 1 logs_generator.go:76] 35 POST /api/v1/namespaces/default/pods/bq6v 394\nI0819 14:59:52.638049 1 logs_generator.go:76] 36 POST /api/v1/namespaces/ns/pods/mgq 549\nI0819 14:59:52.837279 1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/nrn 406\nI0819 14:59:53.037237 1 logs_generator.go:76] 38 GET /api/v1/namespaces/default/pods/5pw5 250\nI0819 14:59:53.237288 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/xwst 339\nI0819 14:59:53.437245 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/zgg 251\nI0819 14:59:53.637244 1 logs_generator.go:76] 41 PUT /api/v1/namespaces/ns/pods/v5j 327\nI0819 14:59:53.837254 1 logs_generator.go:76] 42 PUT /api/v1/namespaces/ns/pods/5w7 234\nI0819 14:59:54.037191 1 logs_generator.go:76] 43 GET /api/v1/namespaces/default/pods/2jt 296\nI0819 14:59:54.237251 1 logs_generator.go:76] 44 POST /api/v1/namespaces/default/pods/wjn4 221\nI0819 14:59:54.437277 1 logs_generator.go:76] 45 PUT /api/v1/namespaces/ns/pods/dtkn 308\nI0819 14:59:54.637237 1 logs_generator.go:76] 46 GET /api/v1/namespaces/ns/pods/g66 487\nI0819 14:59:54.837223 1 logs_generator.go:76] 47 POST /api/v1/namespaces/ns/pods/nhv 497\nI0819 14:59:55.037245 1 logs_generator.go:76] 48 GET /api/v1/namespaces/kube-system/pods/5jgx 535\nI0819 14:59:55.237198 1 logs_generator.go:76] 49 POST /api/v1/namespaces/default/pods/8hr 461\nI0819 14:59:55.437174 1 logs_generator.go:76] 50 PUT /api/v1/namespaces/ns/pods/grgr 496\nI0819 
14:59:55.637214 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/ns/pods/f85f 437\nI0819 14:59:55.838071 1 logs_generator.go:76] 52 POST /api/v1/namespaces/ns/pods/6b4s 558\nI0819 14:59:56.037697 1 logs_generator.go:76] 53 GET /api/v1/namespaces/ns/pods/lx92 237\nI0819 14:59:56.237160 1 logs_generator.go:76] 54 GET /api/v1/namespaces/default/pods/46hs 334\nI0819 14:59:56.437262 1 logs_generator.go:76] 55 POST /api/v1/namespaces/ns/pods/7f5 225\nI0819 14:59:56.637384 1 logs_generator.go:76] 56 GET /api/v1/namespaces/kube-system/pods/mwf 302\nI0819 14:59:56.837230 1 logs_generator.go:76] 57 PUT /api/v1/namespaces/default/pods/vv4s 259\nI0819 14:59:57.037320 1 logs_generator.go:76] 58 GET /api/v1/namespaces/kube-system/pods/8mv 533\nI0819 14:59:57.237237 1 logs_generator.go:76] 59 POST /api/v1/namespaces/default/pods/h9jz 258\nI0819 14:59:57.437248 1 logs_generator.go:76] 60 POST /api/v1/namespaces/default/pods/2db 320\nI0819 14:59:57.637257 1 logs_generator.go:76] 61 PUT /api/v1/namespaces/kube-system/pods/qtj 481\nI0819 14:59:57.837277 1 logs_generator.go:76] 62 POST /api/v1/namespaces/ns/pods/ms4 597\nI0819 14:59:58.037196 1 logs_generator.go:76] 63 GET /api/v1/namespaces/ns/pods/6l7 461\nI0819 14:59:58.237304 1 logs_generator.go:76] 64 POST /api/v1/namespaces/default/pods/rrgz 587\nI0819 14:59:58.437242 1 logs_generator.go:76] 65 GET /api/v1/namespaces/ns/pods/gwjz 285\nI0819 14:59:58.637158 1 logs_generator.go:76] 66 PUT /api/v1/namespaces/ns/pods/f49r 576\nI0819 14:59:58.837206 1 logs_generator.go:76] 67 POST /api/v1/namespaces/default/pods/s97l 312\nI0819 14:59:59.037253 1 logs_generator.go:76] 68 GET /api/v1/namespaces/default/pods/hgkp 467\nI0819 14:59:59.237212 1 logs_generator.go:76] 69 POST /api/v1/namespaces/default/pods/68c 416\nI0819 14:59:59.437207 1 logs_generator.go:76] 70 GET /api/v1/namespaces/kube-system/pods/jmxt 594\nI0819 14:59:59.637227 1 logs_generator.go:76] 71 GET /api/v1/namespaces/kube-system/pods/rm4 429\nI0819 14:59:59.837222 1 logs_generator.go:76] 72 POST /api/v1/namespaces/kube-system/pods/wvp 217\nI0819 15:00:00.037264 1 logs_generator.go:76] 73 GET /api/v1/namespaces/ns/pods/t79q 498\nI0819 15:00:00.237224 1 logs_generator.go:76] 74 POST /api/v1/namespaces/kube-system/pods/qgc 531\nI0819 15:00:00.437208 1 logs_generator.go:76] 75 POST /api/v1/namespaces/ns/pods/fbxj 396\nI0819 15:00:00.637231 1 logs_generator.go:76] 76 POST /api/v1/namespaces/default/pods/pzb 522\nI0819 15:00:00.837241 1 logs_generator.go:76] 77 GET /api/v1/namespaces/kube-system/pods/lsfs 428\nI0819 15:00:01.037217 1 logs_generator.go:76] 78 POST /api/v1/namespaces/default/pods/rns 469\nI0819 15:00:01.237222 1 logs_generator.go:76] 79 GET /api/v1/namespaces/default/pods/jgth 442\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Aug 19 15:00:01.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7030' Aug 19 15:00:06.374: INFO: stderr: "" Aug 19 15:00:06.374: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:00:06.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7030" for this suite. 
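The filtering flags exercised above combine freely and work against any pod. Using the pod and namespace from this run (the container name can be omitted because the pod runs a single container):

    # Only the most recent line
    kubectl -n kubectl-7030 logs logs-generator --tail=1

    # Only the first byte, a cheap probe that any log output exists
    kubectl -n kubectl-7030 logs logs-generator --limit-bytes=1

    # Prefix each line with the timestamp recorded by the container runtime
    kubectl -n kubectl-7030 logs logs-generator --tail=1 --timestamps

    # Restrict output to a time window
    kubectl -n kubectl-7030 logs logs-generator --since=1s
    kubectl -n kubectl-7030 logs logs-generator --since=24h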
• [SLOW TEST:39.629 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":153,"skipped":2300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:00:06.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-592d2353-9af7-4b80-8463-0ddad4800039 STEP: Creating a pod to test consume configMaps Aug 19 15:00:07.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b" in namespace "configmap-495" to be "Succeeded or Failed" Aug 19 15:00:07.195: INFO: Pod "pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b": Phase="Pending", Reason="", readiness=false. Elapsed: 174.199761ms Aug 19 15:00:09.241: INFO: Pod "pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220898227s Aug 19 15:00:11.289: INFO: Pod "pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268695775s Aug 19 15:00:13.296: INFO: Pod "pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.276057211s STEP: Saw pod success Aug 19 15:00:13.297: INFO: Pod "pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b" satisfied condition "Succeeded or Failed" Aug 19 15:00:13.302: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b container configmap-volume-test: STEP: delete the pod Aug 19 15:00:13.687: INFO: Waiting for pod pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b to disappear Aug 19 15:00:13.912: INFO: Pod pod-configmaps-280bce30-d4de-40b4-b6a2-2ed0d6ab110b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:00:13.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-495" for this suite. • [SLOW TEST:7.517 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":154,"skipped":2356,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:00:13.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 19 15:00:14.102: INFO: Waiting up to 5m0s for pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48" in namespace "emptydir-1794" to be "Succeeded or Failed" Aug 19 15:00:14.118: INFO: Pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48": Phase="Pending", Reason="", readiness=false. Elapsed: 15.685093ms Aug 19 15:00:16.126: INFO: Pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023637971s Aug 19 15:00:18.170: INFO: Pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068033785s Aug 19 15:00:20.297: INFO: Pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.195069612s Aug 19 15:00:23.069: INFO: Pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.966533042s STEP: Saw pod success Aug 19 15:00:23.069: INFO: Pod "pod-8baec09e-f025-4266-a80a-a04e1b4c3d48" satisfied condition "Succeeded or Failed" Aug 19 15:00:23.162: INFO: Trying to get logs from node latest-worker2 pod pod-8baec09e-f025-4266-a80a-a04e1b4c3d48 container test-container: STEP: delete the pod Aug 19 15:00:25.799: INFO: Waiting for pod pod-8baec09e-f025-4266-a80a-a04e1b4c3d48 to disappear Aug 19 15:00:26.170: INFO: Pod pod-8baec09e-f025-4266-a80a-a04e1b4c3d48 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:00:26.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1794" for this suite. • [SLOW TEST:13.142 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2357,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:00:27.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-7346 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7346 to expose endpoints map[] Aug 19 15:00:31.410: INFO: successfully validated that service multi-endpoint-test in namespace services-7346 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7346 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7346 to expose endpoints map[pod1:[100]] Aug 19 15:00:37.182: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry Aug 19 15:00:41.069: INFO: successfully validated that service multi-endpoint-test in namespace services-7346 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-7346 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7346 to expose endpoints map[pod1:[100] pod2:[101]] Aug 19 15:00:45.310: INFO: Unexpected endpoints: found 
map[ad2aea92-0839-46c8-b17d-a1fa4556ab02:[100]], expected map[pod1:[100] pod2:[101]], will retry Aug 19 15:00:46.343: INFO: successfully validated that service multi-endpoint-test in namespace services-7346 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-7346 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7346 to expose endpoints map[pod2:[101]] Aug 19 15:00:47.046: INFO: successfully validated that service multi-endpoint-test in namespace services-7346 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-7346 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7346 to expose endpoints map[] Aug 19 15:00:47.221: INFO: successfully validated that service multi-endpoint-test in namespace services-7346 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:00:48.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7346" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.822 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":156,"skipped":2369,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:00:48.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
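The "can't tolerate node latest-control-plane" lines that follow are the expected filtering step: the DaemonSet declares no toleration for the master NoSchedule taint, so only the two worker nodes count toward availability. The same picture can be inspected on a live cluster with stock kubectl (node and namespace names taken from this run):

    # The control-plane node carries the NoSchedule taint the daemon pods lack a toleration for
    kubectl describe node latest-control-plane | grep -A1 Taints

    # Desired and ready counts exclude the tainted node
    kubectl -n daemonsets-8704 get daemonset daemon-set

    # Exactly one daemon pod per schedulable node
    kubectl -n daemonsets-8704 get pods -o wide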
Aug 19 15:00:49.950: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:50.048: INFO: Number of nodes with available pods: 0 Aug 19 15:00:50.048: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:51.061: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:51.069: INFO: Number of nodes with available pods: 0 Aug 19 15:00:51.069: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:52.059: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:52.064: INFO: Number of nodes with available pods: 0 Aug 19 15:00:52.064: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:53.153: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:53.382: INFO: Number of nodes with available pods: 0 Aug 19 15:00:53.382: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:54.516: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:54.879: INFO: Number of nodes with available pods: 0 Aug 19 15:00:54.879: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:55.331: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:55.518: INFO: Number of nodes with available pods: 0 Aug 19 15:00:55.518: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:56.069: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:56.076: INFO: Number of nodes with available pods: 0 Aug 19 15:00:56.077: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:57.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:57.103: INFO: Number of nodes with available pods: 2 Aug 19 15:00:57.104: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
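The step above marks one daemon pod's status.phase as Failed through the API and waits for the controller to replace it. Patching pod status requires direct apiserver access, but the controller reacts the same way to a disappearing pod, so a rough stand-in for observing the revival is deleting a pod by hand (the pod name below is a hypothetical placeholder):

    # Watch the pod list while removing one daemon pod;
    # the DaemonSet controller recreates it within seconds
    kubectl -n daemonsets-8704 get pods -w &
    kubectl -n daemonsets-8704 delete pod daemon-set-xxxxx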
Aug 19 15:00:57.799: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:58.238: INFO: Number of nodes with available pods: 1 Aug 19 15:00:58.238: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:00:59.250: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:00:59.258: INFO: Number of nodes with available pods: 1 Aug 19 15:00:59.258: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:00.246: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:00.251: INFO: Number of nodes with available pods: 1 Aug 19 15:01:00.251: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:01.289: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:01.348: INFO: Number of nodes with available pods: 1 Aug 19 15:01:01.348: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:02.251: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:02.258: INFO: Number of nodes with available pods: 1 Aug 19 15:01:02.258: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:03.354: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:03.765: INFO: Number of nodes with available pods: 1 Aug 19 15:01:03.765: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:04.497: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:04.509: INFO: Number of nodes with available pods: 1 Aug 19 15:01:04.509: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:05.331: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:05.549: INFO: Number of nodes with available pods: 1 Aug 19 15:01:05.549: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:01:06.457: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:01:06.960: INFO: Number of nodes with available pods: 2 Aug 19 15:01:06.960: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
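Once the replacement pod is available, convergence is usually asserted with rollout status, which blocks until the number of available, up-to-date daemon pods matches the number of schedulable nodes:

    # Returns only when every eligible node runs an available daemon pod
    kubectl -n daemonsets-8704 rollout status daemonset/daemon-set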
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8704, will wait for the garbage collector to delete the pods Aug 19 15:01:07.032: INFO: Deleting DaemonSet.extensions daemon-set took: 6.820421ms Aug 19 15:01:07.132: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.64744ms Aug 19 15:01:20.104: INFO: Number of nodes with available pods: 0 Aug 19 15:01:20.104: INFO: Number of running nodes: 0, number of available pods: 0 Aug 19 15:01:20.131: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8704/daemonsets","resourceVersion":"1519608"},"items":null} Aug 19 15:01:20.137: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8704/pods","resourceVersion":"1519608"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:01:20.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8704" for this suite. • [SLOW TEST:31.269 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":157,"skipped":2374,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:01:20.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 19 15:01:20.502: INFO: Waiting up to 5m0s for pod "downward-api-5537e970-6c67-4189-8518-df459af90deb" in namespace "downward-api-4613" to be "Succeeded or Failed" Aug 19 15:01:20.710: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb": Phase="Pending", Reason="", readiness=false. Elapsed: 208.189185ms Aug 19 15:01:22.720: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.217690721s Aug 19 15:01:24.727: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22536247s Aug 19 15:01:26.874: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.371583506s Aug 19 15:01:29.098: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.596142902s Aug 19 15:01:31.224: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.722506968s STEP: Saw pod success Aug 19 15:01:31.225: INFO: Pod "downward-api-5537e970-6c67-4189-8518-df459af90deb" satisfied condition "Succeeded or Failed" Aug 19 15:01:31.264: INFO: Trying to get logs from node latest-worker pod downward-api-5537e970-6c67-4189-8518-df459af90deb container dapi-container: STEP: delete the pod Aug 19 15:01:32.697: INFO: Waiting for pod downward-api-5537e970-6c67-4189-8518-df459af90deb to disappear Aug 19 15:01:33.099: INFO: Pod downward-api-5537e970-6c67-4189-8518-df459af90deb no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:01:33.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4613" for this suite. • [SLOW TEST:13.115 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2376,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:01:33.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:01:46.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1627" for this suite. • [SLOW TEST:12.766 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":159,"skipped":2380,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:01:46.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0819 15:02:28.132767 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 19 15:03:30.447: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
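The behavior under test, deleting the ReplicationController while leaving its pods running, corresponds to a delete request with propagationPolicy=Orphan. With the kubectl contemporary to this run (v1.19) the flag is the boolean --cascade=false; newer releases spell it --cascade=orphan. The pod deletions that follow are the test cleaning up the orphaned survivors itself:

    # Delete the controller but orphan its pods (rc name taken from this run)
    kubectl -n gc-1204 delete rc simpletest.rc --cascade=false

    # The pods survive the delete and drop their ownerReferences
    kubectl -n gc-1204 get pods
    kubectl -n gc-1204 get pods -o jsonpath='{.items[*].metadata.ownerReferences}'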
Aug 19 15:03:30.447: INFO: Deleting pod "simpletest.rc-26hth" in namespace "gc-1204" Aug 19 15:03:30.483: INFO: Deleting pod "simpletest.rc-5phm8" in namespace "gc-1204" Aug 19 15:03:30.672: INFO: Deleting pod "simpletest.rc-82kbv" in namespace "gc-1204" Aug 19 15:03:31.570: INFO: Deleting pod "simpletest.rc-bcj8t" in namespace "gc-1204" Aug 19 15:03:32.558: INFO: Deleting pod "simpletest.rc-fw7jm" in namespace "gc-1204" Aug 19 15:03:33.545: INFO: Deleting pod "simpletest.rc-hcmgw" in namespace "gc-1204" Aug 19 15:03:33.878: INFO: Deleting pod "simpletest.rc-k4vn6" in namespace "gc-1204" Aug 19 15:03:34.348: INFO: Deleting pod "simpletest.rc-lk2c6" in namespace "gc-1204" Aug 19 15:03:35.210: INFO: Deleting pod "simpletest.rc-lp7zq" in namespace "gc-1204" Aug 19 15:03:35.615: INFO: Deleting pod "simpletest.rc-vjn5l" in namespace "gc-1204" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:03:35.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1204" for this suite. • [SLOW TEST:109.762 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":160,"skipped":2388,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:03:35.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 19 15:03:36.556: INFO: Waiting up to 5m0s for pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257" in namespace "emptydir-8937" to be "Succeeded or Failed" Aug 19 15:03:36.918: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257": Phase="Pending", Reason="", readiness=false. Elapsed: 361.761657ms Aug 19 15:03:38.924: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.3677054s Aug 19 15:03:41.322: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.765283083s Aug 19 15:03:43.374: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817473832s Aug 19 15:03:45.382: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257": Phase="Running", Reason="", readiness=true. Elapsed: 8.825455218s Aug 19 15:03:47.586: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.029428671s STEP: Saw pod success Aug 19 15:03:47.586: INFO: Pod "pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257" satisfied condition "Succeeded or Failed" Aug 19 15:03:47.590: INFO: Trying to get logs from node latest-worker pod pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257 container test-container: STEP: delete the pod Aug 19 15:03:48.383: INFO: Waiting for pod pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257 to disappear Aug 19 15:03:48.412: INFO: Pod pod-4ebc0457-48d6-4c44-81e3-7c07fc3d8257 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:03:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8937" for this suite. • [SLOW TEST:12.722 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2397,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:03:48.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 19 15:03:48.750: INFO: Waiting up to 5m0s for pod "pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3" in namespace "emptydir-5889" to be "Succeeded or Failed" Aug 19 15:03:48.826: INFO: Pod "pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 75.952496ms Aug 19 15:03:50.872: INFO: Pod "pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122056693s Aug 19 15:03:52.951: INFO: Pod "pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.201412175s Aug 19 15:03:54.957: INFO: Pod "pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207637905s STEP: Saw pod success Aug 19 15:03:54.958: INFO: Pod "pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3" satisfied condition "Succeeded or Failed" Aug 19 15:03:54.970: INFO: Trying to get logs from node latest-worker2 pod pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3 container test-container: STEP: delete the pod Aug 19 15:03:55.030: INFO: Waiting for pod pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3 to disappear Aug 19 15:03:55.038: INFO: Pod pod-8732670b-7a63-4f9d-a3ff-ab4a4003c8a3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:03:55.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5889" for this suite. • [SLOW TEST:6.516 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2406,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:03:55.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-97d5ae5e-8f92-46f5-803e-1c68015b76d1 STEP: Creating a pod to test consume configMaps Aug 19 15:03:55.289: INFO: Waiting up to 5m0s for pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646" in namespace "configmap-7271" to be "Succeeded or Failed" Aug 19 15:03:55.296: INFO: Pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646": Phase="Pending", Reason="", readiness=false. Elapsed: 6.332523ms Aug 19 15:03:57.598: INFO: Pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308866373s Aug 19 15:03:59.682: INFO: Pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392402712s Aug 19 15:04:01.689: INFO: Pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.399471s Aug 19 15:04:03.696: INFO: Pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.406678374s STEP: Saw pod success Aug 19 15:04:03.697: INFO: Pod "pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646" satisfied condition "Succeeded or Failed" Aug 19 15:04:03.701: INFO: Trying to get logs from node latest-worker pod pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646 container configmap-volume-test: STEP: delete the pod Aug 19 15:04:03.789: INFO: Waiting for pod pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646 to disappear Aug 19 15:04:03.860: INFO: Pod pod-configmaps-40a446ab-6268-4da0-bc51-8e6152cd8646 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:04:03.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7271" for this suite. • [SLOW TEST:8.822 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:04:03.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7952 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 19 15:04:04.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 19 15:04:05.048: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:04:07.056: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:04:09.376: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:04:11.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:04:13.125: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:04:15.509: INFO: The status of 
Pod netserver-0 is Running (Ready = false) Aug 19 15:04:17.278: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:04:19.054: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:04:21.054: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 19 15:04:21.064: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 15:04:23.070: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 15:04:25.071: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 15:04:27.072: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 19 15:04:31.146: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.65:8080/dial?request=hostname&protocol=http&host=10.244.2.61&port=8080&tries=1'] Namespace:pod-network-test-7952 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:04:31.146: INFO: >>> kubeConfig: /root/.kube/config I0819 15:04:31.245427 10 log.go:181] (0x40039ea4d0) (0x4003e01540) Create stream I0819 15:04:31.245597 10 log.go:181] (0x40039ea4d0) (0x4003e01540) Stream added, broadcasting: 1 I0819 15:04:31.251738 10 log.go:181] (0x40039ea4d0) Reply frame received for 1 I0819 15:04:31.251922 10 log.go:181] (0x40039ea4d0) (0x40003c9e00) Create stream I0819 15:04:31.252007 10 log.go:181] (0x40039ea4d0) (0x40003c9e00) Stream added, broadcasting: 3 I0819 15:04:31.253784 10 log.go:181] (0x40039ea4d0) Reply frame received for 3 I0819 15:04:31.253914 10 log.go:181] (0x40039ea4d0) (0x4003e015e0) Create stream I0819 15:04:31.253995 10 log.go:181] (0x40039ea4d0) (0x4003e015e0) Stream added, broadcasting: 5 I0819 15:04:31.255212 10 log.go:181] (0x40039ea4d0) Reply frame received for 5 I0819 15:04:31.346992 10 log.go:181] (0x40039ea4d0) Data frame received for 3 I0819 15:04:31.347130 10 log.go:181] (0x40003c9e00) (3) Data frame handling I0819 15:04:31.347235 10 log.go:181] (0x40003c9e00) (3) Data frame sent I0819 15:04:31.347620 10 log.go:181] (0x40039ea4d0) Data frame received for 5 I0819 15:04:31.347710 10 log.go:181] (0x4003e015e0) (5) Data frame handling I0819 15:04:31.347916 10 log.go:181] (0x40039ea4d0) Data frame received for 3 I0819 15:04:31.348034 10 log.go:181] (0x40003c9e00) (3) Data frame handling I0819 15:04:31.349101 10 log.go:181] (0x40039ea4d0) Data frame received for 1 I0819 15:04:31.349215 10 log.go:181] (0x4003e01540) (1) Data frame handling I0819 15:04:31.349327 10 log.go:181] (0x4003e01540) (1) Data frame sent I0819 15:04:31.349477 10 log.go:181] (0x40039ea4d0) (0x4003e01540) Stream removed, broadcasting: 1 I0819 15:04:31.349634 10 log.go:181] (0x40039ea4d0) Go away received I0819 15:04:31.349904 10 log.go:181] (0x40039ea4d0) (0x4003e01540) Stream removed, broadcasting: 1 I0819 15:04:31.350015 10 log.go:181] (0x40039ea4d0) (0x40003c9e00) Stream removed, broadcasting: 3 I0819 15:04:31.350123 10 log.go:181] (0x40039ea4d0) (0x4003e015e0) Stream removed, broadcasting: 5 Aug 19 15:04:31.350: INFO: Waiting for responses: map[] Aug 19 15:04:31.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.65:8080/dial?request=hostname&protocol=http&host=10.244.1.64&port=8080&tries=1'] Namespace:pod-network-test-7952 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:04:31.356: INFO: >>> kubeConfig: /root/.kube/config I0819 15:04:31.415434 10 log.go:181] (0x4000850160) 
(0x400064b5e0) Create stream I0819 15:04:31.415581 10 log.go:181] (0x4000850160) (0x400064b5e0) Stream added, broadcasting: 1 I0819 15:04:31.419269 10 log.go:181] (0x4000850160) Reply frame received for 1 I0819 15:04:31.419451 10 log.go:181] (0x4000850160) (0x4003ca40a0) Create stream I0819 15:04:31.419538 10 log.go:181] (0x4000850160) (0x4003ca40a0) Stream added, broadcasting: 3 I0819 15:04:31.421255 10 log.go:181] (0x4000850160) Reply frame received for 3 I0819 15:04:31.421461 10 log.go:181] (0x4000850160) (0x400064b680) Create stream I0819 15:04:31.421610 10 log.go:181] (0x4000850160) (0x400064b680) Stream added, broadcasting: 5 I0819 15:04:31.423138 10 log.go:181] (0x4000850160) Reply frame received for 5 I0819 15:04:31.492014 10 log.go:181] (0x4000850160) Data frame received for 3 I0819 15:04:31.492228 10 log.go:181] (0x4003ca40a0) (3) Data frame handling I0819 15:04:31.492411 10 log.go:181] (0x4003ca40a0) (3) Data frame sent I0819 15:04:31.492696 10 log.go:181] (0x4000850160) Data frame received for 5 I0819 15:04:31.492947 10 log.go:181] (0x400064b680) (5) Data frame handling I0819 15:04:31.493086 10 log.go:181] (0x4000850160) Data frame received for 3 I0819 15:04:31.493230 10 log.go:181] (0x4003ca40a0) (3) Data frame handling I0819 15:04:31.494294 10 log.go:181] (0x4000850160) Data frame received for 1 I0819 15:04:31.494468 10 log.go:181] (0x400064b5e0) (1) Data frame handling I0819 15:04:31.494622 10 log.go:181] (0x400064b5e0) (1) Data frame sent I0819 15:04:31.494777 10 log.go:181] (0x4000850160) (0x400064b5e0) Stream removed, broadcasting: 1 I0819 15:04:31.494954 10 log.go:181] (0x4000850160) Go away received I0819 15:04:31.495299 10 log.go:181] (0x4000850160) (0x400064b5e0) Stream removed, broadcasting: 1 I0819 15:04:31.495482 10 log.go:181] (0x4000850160) (0x4003ca40a0) Stream removed, broadcasting: 3 I0819 15:04:31.495647 10 log.go:181] (0x4000850160) (0x400064b680) Stream removed, broadcasting: 5 Aug 19 15:04:31.495: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:04:31.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7952" for this suite. 
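The ExecWithOptions curl calls above are the whole connectivity check: the test pod asks agnhost's `/dial` endpoint to reach each netserver pod and echo back the hostnames that answered. A rough standalone version of that probe (the JSON field name and the pod IPs/ports are assumptions based on this run):

```go
// Hypothetical standalone probe. agnhost's /dial endpoint dials the target
// host:port the given number of times and reports the hostnames that
// answered; the struct below assumes a {"responses": [...]} reply shape.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type dialResponse struct {
	Responses []string `json:"responses"` // one entry per successful try
}

func main() {
	// Test-pod IP, target-pod IP, and ports are the values from this run.
	url := "http://10.244.1.65:8080/dial?request=hostname&protocol=http" +
		"&host=10.244.2.61&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var dr dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
		panic(err)
	}
	// An empty list means the target pod never answered over the pod network.
	fmt.Println("responses:", dr.Responses)
}
```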
• [SLOW TEST:27.634 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2449,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:04:31.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:04:31.687: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 19 15:04:31.720: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:31.732: INFO: Number of nodes with available pods: 0 Aug 19 15:04:31.732: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:04:32.744: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:32.751: INFO: Number of nodes with available pods: 0 Aug 19 15:04:32.751: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:04:33.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:33.884: INFO: Number of nodes with available pods: 0 Aug 19 15:04:33.884: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:04:34.746: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:34.753: INFO: Number of nodes with available pods: 0 Aug 19 15:04:34.753: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:04:35.894: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:36.198: INFO: Number of nodes with available pods: 0 Aug 19 15:04:36.198: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:04:36.752: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:36.938: INFO: Number of nodes with available pods: 1 Aug 19 15:04:36.938: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:04:37.915: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:38.043: INFO: Number of nodes with available pods: 2 Aug 19 15:04:38.043: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 19 15:04:38.664: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:38.664: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:38.742: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:39.959: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:39.959: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 19 15:04:40.093: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:40.778: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:40.779: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:40.831: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:41.752: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:41.752: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:41.752: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:41.763: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:42.752: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:42.752: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:42.752: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:42.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:43.750: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:43.750: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:43.750: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:43.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:44.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:44.751: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:44.751: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:44.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:45.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:45.751: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 19 15:04:45.751: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:45.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:46.752: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:46.752: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:46.752: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:46.763: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:47.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:47.751: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:47.751: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:47.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:48.752: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:48.752: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:48.752: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:48.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:50.085: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:50.085: INFO: Wrong image for pod: daemon-set-wsw4h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:50.085: INFO: Pod daemon-set-wsw4h is not available Aug 19 15:04:50.105: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:50.750: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:50.750: INFO: Pod daemon-set-xgf5j is not available Aug 19 15:04:50.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:51.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:51.751: INFO: Pod daemon-set-xgf5j is not available Aug 19 15:04:51.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:52.924: INFO: Wrong image for pod: daemon-set-dmr8s. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:52.925: INFO: Pod daemon-set-xgf5j is not available Aug 19 15:04:52.984: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:53.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:53.751: INFO: Pod daemon-set-xgf5j is not available Aug 19 15:04:53.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:54.750: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:54.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:55.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:55.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:56.750: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:56.750: INFO: Pod daemon-set-dmr8s is not available Aug 19 15:04:56.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:57.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:57.751: INFO: Pod daemon-set-dmr8s is not available Aug 19 15:04:57.761: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:58.750: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:58.750: INFO: Pod daemon-set-dmr8s is not available Aug 19 15:04:58.763: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:04:59.751: INFO: Wrong image for pod: daemon-set-dmr8s. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 19 15:04:59.751: INFO: Pod daemon-set-dmr8s is not available Aug 19 15:04:59.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:05:00.753: INFO: Pod daemon-set-xz46k is not available Aug 19 15:05:00.762: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 19 15:05:00.771: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:05:00.778: INFO: Number of nodes with available pods: 1 Aug 19 15:05:00.778: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:05:01.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:05:01.797: INFO: Number of nodes with available pods: 1 Aug 19 15:05:01.797: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:05:02.792: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:05:02.798: INFO: Number of nodes with available pods: 1 Aug 19 15:05:02.798: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:05:03.791: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:05:03.799: INFO: Number of nodes with available pods: 2 Aug 19 15:05:03.799: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9997, will wait for the garbage collector to delete the pods Aug 19 15:05:03.891: INFO: Deleting DaemonSet.extensions daemon-set took: 8.936277ms Aug 19 15:05:04.292: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.623575ms Aug 19 15:05:10.097: INFO: Number of nodes with available pods: 0 Aug 19 15:05:10.097: INFO: Number of running nodes: 0, number of available pods: 0 Aug 19 15:05:10.101: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9997/daemonsets","resourceVersion":"1520722"},"items":null} Aug 19 15:05:10.105: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9997/pods","resourceVersion":"1520722"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:05:10.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9997" for this suite. 
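For reference, the object this test drives looks roughly like the sketch below: a DaemonSet whose update strategy is RollingUpdate, created with the httpd image and then flipped to agnhost, which is the image change visible in the polling above (the label key and container name are illustrative assumptions):

```go
// Illustrative sketch of the DaemonSet under test: RollingUpdate strategy,
// created with httpd and then updated to agnhost, as in the log above.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	dsClient := clientset.AppsV1().DaemonSets("daemonsets-9997")

	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate: pods are replaced node by node after a
			// template change, bounded by maxUnavailable (default 1).
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app", // assumed container name
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}

	created, err := dsClient.Create(context.TODO(), ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// The image flip the test performs; the controller then deletes and
	// recreates daemon pods until every node runs the new image.
	created.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.20"
	if _, err := dsClient.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```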
• [SLOW TEST:38.624 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":165,"skipped":2455,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:05:10.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:05:10.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9751" for this suite. 
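The nearly silent test body above amounts to one assertion: the well-known `kubernetes` Service exists in the `default` namespace and exposes the API securely on port 443/`https`. A minimal re-creation of that check (kubeconfig path assumed):

```go
// Roughly what "should provide secure master service" asserts: the default
// "kubernetes" Service is present and carries the https/443 port.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	svc, err := clientset.CoreV1().Services("default").Get(
		context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 {
			fmt.Println("secure master service present at", svc.Spec.ClusterIP)
			return
		}
	}
	panic("kubernetes service does not expose https/443")
}
```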
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":166,"skipped":2470,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:05:10.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:05:10.452: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"eece778e-bbed-4c1d-b3e1-fd4d40c9bb4f", Controller:(*bool)(0x4000d87fba), BlockOwnerDeletion:(*bool)(0x4000d87fbb)}} Aug 19 15:05:10.506: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6503f961-5f3d-400f-9920-16d5c6919ae9", Controller:(*bool)(0x40018c012a), BlockOwnerDeletion:(*bool)(0x40018c012b)}} Aug 19 15:05:10.538: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3fd1709e-4f4b-4f5e-acb9-5eda2486f681", Controller:(*bool)(0x40014468ea), BlockOwnerDeletion:(*bool)(0x40014468eb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:05:15.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5453" for this suite. 
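The three OwnerReference dumps above form a deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2; the garbage collector must still delete all three rather than deadlock on the ring. A sketch of how such a cycle can be constructed (the pod image and the update-after-create approach are illustrative assumptions, not the test's exact mechanics):

```go
// Illustrative construction of the ownership cycle: create three pods,
// then point each one's owner reference around the ring
// pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ownerRef(owner *corev1.Pod) metav1.OwnerReference {
	ctrl, block := true, true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID, // the UID comes from the created object
		Controller:         &ctrl,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods := clientset.CoreV1().Pods("gc-5453")

	mk := func(name string) *corev1.Pod {
		p := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{Containers: []corev1.Container{{
				Name:  "c",
				Image: "docker.io/library/httpd:2.4.38-alpine", // assumed image
			}}},
		}
		created, err := pods.Create(context.TODO(), p, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		return created
	}
	pod1, pod2, pod3 := mk("pod1"), mk("pod2"), mk("pod3")

	// Close the cycle; deleting any one pod now involves all three, and the
	// garbage collector must still make progress instead of waiting forever.
	for _, link := range []struct{ child, owner *corev1.Pod }{
		{pod1, pod3}, {pod2, pod1}, {pod3, pod2},
	} {
		link.child.OwnerReferences = []metav1.OwnerReference{ownerRef(link.owner)}
		if _, err := pods.Update(context.TODO(), link.child, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
```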
• [SLOW TEST:5.403 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":167,"skipped":2476,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:05:15.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:05:15.771: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b" in namespace "security-context-test-9818" to be "Succeeded or Failed" Aug 19 15:05:15.784: INFO: Pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.359098ms Aug 19 15:05:17.885: INFO: Pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113871422s Aug 19 15:05:19.892: INFO: Pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b": Phase="Running", Reason="", readiness=true. Elapsed: 4.120532398s Aug 19 15:05:21.904: INFO: Pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132521626s Aug 19 15:05:21.904: INFO: Pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b" satisfied condition "Succeeded or Failed" Aug 19 15:05:21.913: INFO: Got logs for pod "busybox-privileged-false-9f6bc088-3e77-46b9-8e24-bac7d949a09b": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:05:21.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9818" for this suite. 
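The `RTNETLINK answers: Operation not permitted` log line is the expected outcome here: with `privileged: false`, the busybox container may not perform netlink operations such as `ip link add`. Approximately the pod under test (the image tag and exact command are assumptions):

```go
// Illustrative pod spec for the unprivileged check: privileged=false, and
// the container attempts a netlink operation that should be refused.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	privileged := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29", // assumed tag
				Command: []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{
					// Without privilege, the kernel answers the netlink
					// request with "Operation not permitted".
					Privileged: &privileged,
				},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("security-context-test-9818").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```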
• [SLOW TEST:6.303 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2484,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:05:21.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-03df968b-d1e4-4337-85bf-504c991385dc STEP: Creating a pod to test consume secrets Aug 19 15:05:22.096: INFO: Waiting up to 5m0s for pod "pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84" in namespace "secrets-9103" to be "Succeeded or Failed" Aug 19 15:05:22.118: INFO: Pod "pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84": Phase="Pending", Reason="", readiness=false. Elapsed: 21.97164ms Aug 19 15:05:24.198: INFO: Pod "pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101504615s Aug 19 15:05:26.205: INFO: Pod "pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84": Phase="Running", Reason="", readiness=true. Elapsed: 4.108589995s Aug 19 15:05:28.213: INFO: Pod "pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.116592862s STEP: Saw pod success Aug 19 15:05:28.213: INFO: Pod "pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84" satisfied condition "Succeeded or Failed" Aug 19 15:05:28.219: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84 container secret-volume-test: STEP: delete the pod Aug 19 15:05:28.278: INFO: Waiting for pod pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84 to disappear Aug 19 15:05:28.305: INFO: Pod pod-secrets-094cb4d8-9e83-42a1-b4e2-1f2c6b53da84 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:05:28.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9103" for this suite. • [SLOW TEST:6.389 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:05:28.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-12 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-12 STEP: creating replication controller externalsvc in namespace services-12 I0819 15:05:29.487769 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-12, replica count: 2 I0819 15:05:32.539179 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:05:35.539702 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 19 15:05:36.049: INFO: Creating new exec 
pod Aug 19 15:05:46.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-12 execpod9wkns -- /bin/sh -x -c nslookup clusterip-service.services-12.svc.cluster.local' Aug 19 15:05:47.903: INFO: stderr: "I0819 15:05:47.775767 2586 log.go:181] (0x40009a0160) (0x40002341e0) Create stream\nI0819 15:05:47.778580 2586 log.go:181] (0x40009a0160) (0x40002341e0) Stream added, broadcasting: 1\nI0819 15:05:47.802130 2586 log.go:181] (0x40009a0160) Reply frame received for 1\nI0819 15:05:47.802771 2586 log.go:181] (0x40009a0160) (0x4000adc000) Create stream\nI0819 15:05:47.802837 2586 log.go:181] (0x40009a0160) (0x4000adc000) Stream added, broadcasting: 3\nI0819 15:05:47.804357 2586 log.go:181] (0x40009a0160) Reply frame received for 3\nI0819 15:05:47.804650 2586 log.go:181] (0x40009a0160) (0x4000a45ea0) Create stream\nI0819 15:05:47.804705 2586 log.go:181] (0x40009a0160) (0x4000a45ea0) Stream added, broadcasting: 5\nI0819 15:05:47.805768 2586 log.go:181] (0x40009a0160) Reply frame received for 5\nI0819 15:05:47.871820 2586 log.go:181] (0x40009a0160) Data frame received for 5\nI0819 15:05:47.872062 2586 log.go:181] (0x4000a45ea0) (5) Data frame handling\nI0819 15:05:47.872605 2586 log.go:181] (0x4000a45ea0) (5) Data frame sent\n+ nslookup clusterip-service.services-12.svc.cluster.local\nI0819 15:05:47.880359 2586 log.go:181] (0x40009a0160) Data frame received for 3\nI0819 15:05:47.880443 2586 log.go:181] (0x4000adc000) (3) Data frame handling\nI0819 15:05:47.880522 2586 log.go:181] (0x4000adc000) (3) Data frame sent\nI0819 15:05:47.881528 2586 log.go:181] (0x40009a0160) Data frame received for 3\nI0819 15:05:47.881632 2586 log.go:181] (0x4000adc000) (3) Data frame handling\nI0819 15:05:47.881765 2586 log.go:181] (0x4000adc000) (3) Data frame sent\nI0819 15:05:47.881900 2586 log.go:181] (0x40009a0160) Data frame received for 3\nI0819 15:05:47.882091 2586 log.go:181] (0x40009a0160) Data frame received for 5\nI0819 15:05:47.882245 2586 log.go:181] (0x4000a45ea0) (5) Data frame handling\nI0819 15:05:47.882376 2586 log.go:181] (0x4000adc000) (3) Data frame handling\nI0819 15:05:47.883997 2586 log.go:181] (0x40009a0160) Data frame received for 1\nI0819 15:05:47.884073 2586 log.go:181] (0x40002341e0) (1) Data frame handling\nI0819 15:05:47.884148 2586 log.go:181] (0x40002341e0) (1) Data frame sent\nI0819 15:05:47.885709 2586 log.go:181] (0x40009a0160) (0x40002341e0) Stream removed, broadcasting: 1\nI0819 15:05:47.888223 2586 log.go:181] (0x40009a0160) Go away received\nI0819 15:05:47.892360 2586 log.go:181] (0x40009a0160) (0x40002341e0) Stream removed, broadcasting: 1\nI0819 15:05:47.892990 2586 log.go:181] (0x40009a0160) (0x4000adc000) Stream removed, broadcasting: 3\nI0819 15:05:47.893277 2586 log.go:181] (0x40009a0160) (0x4000a45ea0) Stream removed, broadcasting: 5\n" Aug 19 15:05:47.904: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-12.svc.cluster.local\tcanonical name = externalsvc.services-12.svc.cluster.local.\nName:\texternalsvc.services-12.svc.cluster.local\nAddress: 10.103.148.95\n\n" STEP: deleting ReplicationController externalsvc in namespace services-12, will wait for the garbage collector to delete the pods Aug 19 15:05:47.973: INFO: Deleting ReplicationController externalsvc took: 6.713517ms Aug 19 15:05:48.374: INFO: Terminating ReplicationController externalsvc pods took: 400.736927ms Aug 19 15:05:59.729: INFO: Cleaning up the ClusterIP to ExternalName test service 
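The type flip at the heart of this test, in miniature: re-type the ClusterIP service as ExternalName pointing at externalsvc's in-cluster FQDN, after which DNS serves the CNAME seen in the nslookup output above (clearing the ports alongside the cluster IP is an illustrative choice, not a requirement):

```go
// Illustrative sketch of converting a ClusterIP Service to ExternalName.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	services := clientset.CoreV1().Services("services-12")

	svc, err := services.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// ExternalName services carry no cluster IP of their own; the cluster
	// DNS answers lookups with a CNAME to spec.externalName instead.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-12.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil

	if _, err := services.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```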
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:05:59.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-12" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:31.452 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":170,"skipped":2535,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:05:59.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 19 15:05:59.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1037' Aug 19 15:06:01.319: INFO: stderr: "" Aug 19 15:06:01.319: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Aug 19 15:06:01.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1037' Aug 19 15:06:10.350: INFO: stderr: "" Aug 19 15:06:10.351: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:06:10.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1037" for this suite. • [SLOW TEST:10.775 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":171,"skipped":2543,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:06:10.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:06:10.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8" in namespace "downward-api-5793" to be "Succeeded or Failed" Aug 19 15:06:10.857: INFO: Pod "downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8": Phase="Pending", Reason="", readiness=false. Elapsed: 52.610681ms Aug 19 15:06:13.105: INFO: Pod "downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300497401s Aug 19 15:06:15.114: INFO: Pod "downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308848409s Aug 19 15:06:17.122: INFO: Pod "downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.316870118s STEP: Saw pod success Aug 19 15:06:17.122: INFO: Pod "downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8" satisfied condition "Succeeded or Failed" Aug 19 15:06:17.127: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8 container client-container: STEP: delete the pod Aug 19 15:06:17.171: INFO: Waiting for pod downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8 to disappear Aug 19 15:06:17.182: INFO: Pod downwardapi-volume-e6f8707f-b4c0-440a-9a1b-fd04f36f01b8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:06:17.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5793" for this suite. • [SLOW TEST:6.643 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2545,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:06:17.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7046.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7046.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 15:06:25.378: INFO: DNS probes using dns-test-5bf5cf37-15c3-4d56-9634-5a7937f274d9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7046.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-7046.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 15:06:33.479: INFO: File wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 19 15:06:33.484: INFO: File jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 19 15:06:33.484: INFO: Lookups using dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db failed for: [wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local] Aug 19 15:06:38.492: INFO: File wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains '' instead of 'bar.example.com.' Aug 19 15:06:38.497: INFO: File jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 19 15:06:38.497: INFO: Lookups using dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db failed for: [wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local] Aug 19 15:06:43.569: INFO: File wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 19 15:06:43.705: INFO: File jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 19 15:06:43.705: INFO: Lookups using dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db failed for: [wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local] Aug 19 15:06:48.491: INFO: File wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 19 15:06:48.496: INFO: File jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local from pod dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 19 15:06:48.496: INFO: Lookups using dns-7046/dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db failed for: [wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local] Aug 19 15:06:53.598: INFO: DNS probes using dns-test-1f05eec9-3a44-43b5-b21c-363f77b1c7db succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7046.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7046.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7046.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7046.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 15:07:02.770: INFO: DNS probes using dns-test-8350b2de-d195-4a8d-8b78-47037ae96583 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:07:02.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7046" for this suite. • [SLOW TEST:45.721 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":173,"skipped":2546,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:07:02.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:07:03.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config version' Aug 19 15:07:04.942: INFO: stderr: "" Aug 19 15:07:04.942: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.4\", GitCommit:\"1afc53514032a44d091ae4a9f6e092171db9fe10\", GitTreeState:\"clean\", BuildDate:\"2020-08-04T14:29:10Z\", 
GoVersion:\"go1.15rc1\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:07:04.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8403" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":174,"skipped":2561,"failed":0}
S
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 15:07:04.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:07:10.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1444" for this suite.
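
The adoption above can be reproduced by hand: create a bare pod carrying the 'name' label, then a ReplicationController whose selector matches it; the controller patches an ownerReference onto the pre-existing pod instead of creating a new one. A minimal sketch (the image and label values are illustrative, only the pod-adoption name comes from the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
EOF
# The orphan pod should now carry an ownerReference to the RC:
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'
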
• [SLOW TEST:6.041 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":175,"skipped":2562,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 15:07:11.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-52e9c6c6-29cd-4ca7-9035-bfd3f6f66565
STEP: Creating configMap with name cm-test-opt-upd-5071de0f-02ad-43b6-b955-3a00e4780b9b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-52e9c6c6-29cd-4ca7-9035-bfd3f6f66565
STEP: Updating configmap cm-test-opt-upd-5071de0f-02ad-43b6-b955-3a00e4780b9b
STEP: Creating configMap with name cm-test-opt-create-b47f85f6-f61a-4556-829b-9b011e9f3b1e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:08:50.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7708" for this suite.
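
As a reference for the optional-ConfigMap behavior exercised above, a minimal sketch (pod, volume, and key names are illustrative): marking the volume's configMap as optional lets the pod start even while the ConfigMap is absent, and later creations or updates are eventually projected into the mounted files.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-test-opt
      optional: true
EOF
# Created after the pod, yet its data shows up under /etc/cm within the kubelet sync period:
kubectl create configmap cm-test-opt --from-literal=data-1=value-1
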
• [SLOW TEST:99.054 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":2580,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:08:50.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7684 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 19 15:08:50.186: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 19 15:08:50.287: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:08:52.431: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:08:54.293: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:08:56.306: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:08:58.403: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:09:00.306: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:09:02.295: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:09:04.295: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:09:06.295: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 19 15:09:08.295: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 19 15:09:08.307: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 15:09:11.190: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 19 15:09:12.314: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 19 15:09:18.431: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:8080/dial?request=hostname&protocol=udp&host=10.244.2.72&port=8081&tries=1'] Namespace:pod-network-test-7684 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:09:18.431: INFO: >>> kubeConfig: /root/.kube/config I0819 15:09:18.488456 10 log.go:181] 
(0x4003b96840) (0x40037ef180) Create stream I0819 15:09:18.488668 10 log.go:181] (0x4003b96840) (0x40037ef180) Stream added, broadcasting: 1 I0819 15:09:18.491994 10 log.go:181] (0x4003b96840) Reply frame received for 1 I0819 15:09:18.492100 10 log.go:181] (0x4003b96840) (0x4001c1bf40) Create stream I0819 15:09:18.492160 10 log.go:181] (0x4003b96840) (0x4001c1bf40) Stream added, broadcasting: 3 I0819 15:09:18.493274 10 log.go:181] (0x4003b96840) Reply frame received for 3 I0819 15:09:18.493360 10 log.go:181] (0x4003b96840) (0x40037ef220) Create stream I0819 15:09:18.493407 10 log.go:181] (0x4003b96840) (0x40037ef220) Stream added, broadcasting: 5 I0819 15:09:18.494258 10 log.go:181] (0x4003b96840) Reply frame received for 5 I0819 15:09:18.541605 10 log.go:181] (0x4003b96840) Data frame received for 3 I0819 15:09:18.541742 10 log.go:181] (0x4001c1bf40) (3) Data frame handling I0819 15:09:18.541818 10 log.go:181] (0x4001c1bf40) (3) Data frame sent I0819 15:09:18.541938 10 log.go:181] (0x4003b96840) Data frame received for 5 I0819 15:09:18.542110 10 log.go:181] (0x40037ef220) (5) Data frame handling I0819 15:09:18.542235 10 log.go:181] (0x4003b96840) Data frame received for 3 I0819 15:09:18.542407 10 log.go:181] (0x4001c1bf40) (3) Data frame handling I0819 15:09:18.543359 10 log.go:181] (0x4003b96840) Data frame received for 1 I0819 15:09:18.543469 10 log.go:181] (0x40037ef180) (1) Data frame handling I0819 15:09:18.543590 10 log.go:181] (0x40037ef180) (1) Data frame sent I0819 15:09:18.543690 10 log.go:181] (0x4003b96840) (0x40037ef180) Stream removed, broadcasting: 1 I0819 15:09:18.543813 10 log.go:181] (0x4003b96840) Go away received I0819 15:09:18.544153 10 log.go:181] (0x4003b96840) (0x40037ef180) Stream removed, broadcasting: 1 I0819 15:09:18.544322 10 log.go:181] (0x4003b96840) (0x4001c1bf40) Stream removed, broadcasting: 3 I0819 15:09:18.544443 10 log.go:181] (0x4003b96840) (0x40037ef220) Stream removed, broadcasting: 5 Aug 19 15:09:18.544: INFO: Waiting for responses: map[] Aug 19 15:09:18.550: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.73:8080/dial?request=hostname&protocol=udp&host=10.244.1.75&port=8081&tries=1'] Namespace:pod-network-test-7684 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:09:18.550: INFO: >>> kubeConfig: /root/.kube/config I0819 15:09:18.602054 10 log.go:181] (0x4003660160) (0x40025ef220) Create stream I0819 15:09:18.602171 10 log.go:181] (0x4003660160) (0x40025ef220) Stream added, broadcasting: 1 I0819 15:09:18.605585 10 log.go:181] (0x4003660160) Reply frame received for 1 I0819 15:09:18.605803 10 log.go:181] (0x4003660160) (0x40025ef2c0) Create stream I0819 15:09:18.605913 10 log.go:181] (0x4003660160) (0x40025ef2c0) Stream added, broadcasting: 3 I0819 15:09:18.607310 10 log.go:181] (0x4003660160) Reply frame received for 3 I0819 15:09:18.607491 10 log.go:181] (0x4003660160) (0x40025ef360) Create stream I0819 15:09:18.607592 10 log.go:181] (0x4003660160) (0x40025ef360) Stream added, broadcasting: 5 I0819 15:09:18.609082 10 log.go:181] (0x4003660160) Reply frame received for 5 I0819 15:09:18.659019 10 log.go:181] (0x4003660160) Data frame received for 3 I0819 15:09:18.659236 10 log.go:181] (0x40025ef2c0) (3) Data frame handling I0819 15:09:18.659386 10 log.go:181] (0x40025ef2c0) (3) Data frame sent I0819 15:09:18.659518 10 log.go:181] (0x4003660160) Data frame received for 3 I0819 15:09:18.659617 10 log.go:181] (0x40025ef2c0) (3) Data frame 
handling I0819 15:09:18.659752 10 log.go:181] (0x4003660160) Data frame received for 5 I0819 15:09:18.659939 10 log.go:181] (0x40025ef360) (5) Data frame handling I0819 15:09:18.661262 10 log.go:181] (0x4003660160) Data frame received for 1 I0819 15:09:18.661319 10 log.go:181] (0x40025ef220) (1) Data frame handling I0819 15:09:18.661377 10 log.go:181] (0x40025ef220) (1) Data frame sent I0819 15:09:18.661444 10 log.go:181] (0x4003660160) (0x40025ef220) Stream removed, broadcasting: 1 I0819 15:09:18.661513 10 log.go:181] (0x4003660160) Go away received I0819 15:09:18.661779 10 log.go:181] (0x4003660160) (0x40025ef220) Stream removed, broadcasting: 1 I0819 15:09:18.661958 10 log.go:181] (0x4003660160) (0x40025ef2c0) Stream removed, broadcasting: 3 I0819 15:09:18.662049 10 log.go:181] (0x4003660160) (0x40025ef360) Stream removed, broadcasting: 5 Aug 19 15:09:18.662: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:09:18.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7684" for this suite. • [SLOW TEST:28.617 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:09:18.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:09:32.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8841" for this suite. • [SLOW TEST:14.261 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":178,"skipped":2666,"failed":0} [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:09:32.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:09:34.472: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f" in namespace "security-context-test-685" to be "Succeeded or Failed" Aug 19 15:09:34.759: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f": Phase="Pending", Reason="", readiness=false. Elapsed: 287.480426ms Aug 19 15:09:36.766: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.293869799s
Aug 19 15:09:38.998: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.525910139s
Aug 19 15:09:41.098: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625866019s
Aug 19 15:09:43.465: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.993533144s
Aug 19 15:09:45.473: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.000602314s
Aug 19 15:09:45.473: INFO: Pod "busybox-user-65534-d55dade6-c69e-4394-aa43-ab5f9c87288f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:09:45.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-685" for this suite.

• [SLOW TEST:12.548 seconds]
[k8s.io] Security Context
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsUser
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":2666,"failed":0}
SS
------------------------------
[sig-node] PodTemplates
  should run the lifecycle of PodTemplates [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 15:09:45.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:09:45.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-6703" for this suite.
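
The PodTemplate lifecycle checked above is plain CRUD on the v1 PodTemplate resource; a minimal sketch (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PodTemplate
metadata:
  name: nginx-pod-template
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: nginx
      image: nginx:1.19
EOF
kubectl get podtemplates
kubectl delete podtemplate nginx-pod-template
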
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":180,"skipped":2668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:09:45.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 19 15:09:46.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5929 /api/v1/namespaces/watch-5929/configmaps/e2e-watch-test-watch-closed 2aa78798-41ea-43f5-a40c-a92a023f81c7 1522063 0 2020-08-19 15:09:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-19 15:09:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 15:09:46.183: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5929 /api/v1/namespaces/watch-5929/configmaps/e2e-watch-test-watch-closed 2aa78798-41ea-43f5-a40c-a92a023f81c7 1522064 0 2020-08-19 15:09:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-19 15:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 19 15:09:46.411: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5929 /api/v1/namespaces/watch-5929/configmaps/e2e-watch-test-watch-closed 2aa78798-41ea-43f5-a40c-a92a023f81c7 1522065 0 2020-08-19 15:09:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-19 15:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 15:09:46.413: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5929 /api/v1/namespaces/watch-5929/configmaps/e2e-watch-test-watch-closed 2aa78798-41ea-43f5-a40c-a92a023f81c7 1522066 0 2020-08-19 15:09:46 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-19 15:09:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:09:46.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5929" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":181,"skipped":2693,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:09:46.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0819 15:09:49.508381 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 19 15:10:52.411: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:10:52.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5096" for this suite. 
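
The orphaning behavior verified above can be reproduced with kubectl (the deployment name is illustrative). Note that kubectl of this era (v1.19) spells the orphan propagation policy --cascade=false, while newer releases use --cascade=orphan:

kubectl create deployment gc-demo --image=nginx:1.19
kubectl delete deployment gc-demo --cascade=false
# The ReplicaSet survives the delete, with its ownerReference to the Deployment removed:
kubectl get replicasets -l app=gc-demo
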
• [SLOW TEST:65.996 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":182,"skipped":2700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:10:52.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:10:52.860: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 19 15:11:15.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 create -f -' Aug 19 15:11:28.138: INFO: stderr: "" Aug 19 15:11:28.138: INFO: stdout: "e2e-test-crd-publish-openapi-2133-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 19 15:11:28.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 delete e2e-test-crd-publish-openapi-2133-crds test-cr' Aug 19 15:11:29.709: INFO: stderr: "" Aug 19 15:11:29.709: INFO: stdout: "e2e-test-crd-publish-openapi-2133-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 19 15:11:29.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 apply -f -' Aug 19 15:11:32.877: INFO: stderr: "" Aug 19 15:11:32.877: INFO: stdout: "e2e-test-crd-publish-openapi-2133-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 19 15:11:32.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1930 delete e2e-test-crd-publish-openapi-2133-crds test-cr' Aug 19 15:11:34.404: INFO: stderr: "" Aug 19 15:11:34.404: INFO: stdout: "e2e-test-crd-publish-openapi-2133-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 19 15:11:34.406: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2133-crds' Aug 19 15:11:38.254: INFO: stderr: "" Aug 19 15:11:38.254: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2133-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:11:59.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1930" for this suite. • [SLOW TEST:67.392 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":183,"skipped":2725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:11:59.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that 
daemon pods launch on every node of the cluster. Aug 19 15:12:01.980: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:01.998: INFO: Number of nodes with available pods: 0 Aug 19 15:12:01.998: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:12:03.122: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:03.128: INFO: Number of nodes with available pods: 0 Aug 19 15:12:03.128: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:12:04.152: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:04.206: INFO: Number of nodes with available pods: 0 Aug 19 15:12:04.206: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:12:05.019: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:05.024: INFO: Number of nodes with available pods: 0 Aug 19 15:12:05.024: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:12:06.058: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:06.189: INFO: Number of nodes with available pods: 0 Aug 19 15:12:06.189: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:12:07.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:07.146: INFO: Number of nodes with available pods: 0 Aug 19 15:12:07.146: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:12:08.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:08.012: INFO: Number of nodes with available pods: 2 Aug 19 15:12:08.013: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 19 15:12:08.459: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:08.669: INFO: Number of nodes with available pods: 1 Aug 19 15:12:08.669: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:09.680: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:09.686: INFO: Number of nodes with available pods: 1 Aug 19 15:12:09.686: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:10.683: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:11.035: INFO: Number of nodes with available pods: 1 Aug 19 15:12:11.035: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:11.676: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:11.681: INFO: Number of nodes with available pods: 1 Aug 19 15:12:11.682: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:12.737: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:12.743: INFO: Number of nodes with available pods: 1 Aug 19 15:12:12.743: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:13.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:13.683: INFO: Number of nodes with available pods: 1 Aug 19 15:12:13.683: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:14.680: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:14.686: INFO: Number of nodes with available pods: 1 Aug 19 15:12:14.686: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:15.690: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:15.697: INFO: Number of nodes with available pods: 1 Aug 19 15:12:15.697: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 15:12:16.677: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:12:16.681: INFO: Number of nodes with available pods: 2 Aug 19 15:12:16.681: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9124, will wait for the garbage collector to delete the pods Aug 19 15:12:16.743: INFO: Deleting DaemonSet.extensions daemon-set took: 5.969349ms Aug 19 
15:12:17.144: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.514713ms
Aug 19 15:12:30.193: INFO: Number of nodes with available pods: 0
Aug 19 15:12:30.194: INFO: Number of running nodes: 0, number of available pods: 0
Aug 19 15:12:30.198: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9124/daemonsets","resourceVersion":"1522665"},"items":null}
Aug 19 15:12:30.202: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9124/pods","resourceVersion":"1522665"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:12:30.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9124" for this suite.

• [SLOW TEST:30.792 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":184,"skipped":2751,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 15:12:30.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 15:12:38.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2229" for this suite.
STEP: Destroying namespace "nsdeletetest-4847" for this suite.
Aug 19 15:12:38.878: INFO: Namespace nsdeletetest-4847 was already deleted
STEP: Destroying namespace "nsdeletetest-4175" for this suite.
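
A by-hand version of the namespace-deletion check above (namespace and service names are illustrative):

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo
# Namespace deletion is asynchronous; once it completes, recreate and verify:
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo   # expect: No resources found
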
• [SLOW TEST:8.265 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":185,"skipped":2790,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:12:38.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Aug 19 15:12:38.940: INFO: >>> kubeConfig: /root/.kube/config Aug 19 15:12:50.584: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:14:07.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-465" for this suite. 
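For context, this test registers two CustomResourceDefinitions that share a group and version but declare different kinds, then checks that both schemas show up in the aggregated OpenAPI document. A minimal sketch of one such CRD under illustrative names (the second would differ only in its names and kind, e.g. Bar/bars in the same example.com/v1):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object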
• [SLOW TEST:88.412 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":186,"skipped":2796,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 15:14:07.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Aug 19 15:14:08.301: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Aug 19 15:14:08.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7489'
Aug 19 15:14:11.973: INFO: stderr: ""
Aug 19 15:14:11.973: INFO: stdout: "service/agnhost-replica created\n"
Aug 19 15:14:11.974: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Aug 19 15:14:11.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7489'
Aug 19 15:14:16.034: INFO: stderr: ""
Aug 19 15:14:16.034: INFO: stdout: "service/agnhost-primary created\n"
Aug 19 15:14:16.035: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 19 15:14:16.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7489'
Aug 19 15:14:19.629: INFO: stderr: ""
Aug 19 15:14:19.630: INFO: stdout: "service/frontend created\n"
Aug 19 15:14:19.631: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 19 15:14:19.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7489'
Aug 19 15:14:23.182: INFO: stderr: ""
Aug 19 15:14:23.182: INFO: stdout: "deployment.apps/frontend created\n"
Aug 19 15:14:23.183: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 19 15:14:23.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7489'
Aug 19 15:14:27.402: INFO: stderr: ""
Aug 19 15:14:27.402: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Aug 19 15:14:27.403: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 19 15:14:27.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7489'
Aug 19 15:14:30.699: INFO: stderr: ""
Aug 19 15:14:30.699: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Aug 19 15:14:30.699: INFO: Waiting for all frontend pods to be Running.
Aug 19 15:14:35.752: INFO: Waiting for frontend to serve content.
Aug 19 15:14:37.135: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Aug 19 15:14:42.147: INFO: Trying to add a new entry to the guestbook.
Aug 19 15:14:42.158: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 19 15:14:42.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7489'
Aug 19 15:14:43.881: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:14:43.881: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Aug 19 15:14:43.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7489' Aug 19 15:14:45.547: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:14:45.547: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 19 15:14:45.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7489' Aug 19 15:14:47.679: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:14:47.679: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 19 15:14:47.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7489' Aug 19 15:14:49.095: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:14:49.095: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 19 15:14:49.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7489' Aug 19 15:14:50.828: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:14:50.829: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 19 15:14:50.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7489' Aug 19 15:14:53.011: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:14:53.012: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:14:53.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7489" for this suite. 
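As an aside, the frontend Service manifest above is applied with its type commented out, so it defaults to ClusterIP; on a cluster with load-balancer support the same manifest with the commented line enabled would read:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

The cleanup steps use --grace-period=0 --force, which is why kubectl prints the same warning for every object: deletion is requested immediately and is not confirmed before the command returns.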
• [SLOW TEST:47.229 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":187,"skipped":2806,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:14:54.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:14:58.517: INFO: Create a RollingUpdate DaemonSet Aug 19 15:14:58.880: INFO: Check that daemon pods launch on every node of the cluster Aug 19 15:14:59.130: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:14:59.424: INFO: Number of nodes with available pods: 0 Aug 19 15:14:59.424: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:15:01.369: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:01.443: INFO: Number of nodes with available pods: 0 Aug 19 15:15:01.443: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:15:02.474: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:02.688: INFO: Number of nodes with available pods: 0 Aug 19 15:15:02.688: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:15:03.673: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:03.749: INFO: Number of nodes with available pods: 0 Aug 19 15:15:03.749: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:15:04.813: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:05.068: INFO: Number of nodes with available pods: 0 Aug 19 15:15:05.068: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:15:05.452: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:05.498: INFO: Number of nodes with available pods: 0 Aug 19 15:15:05.498: INFO: Node latest-worker is running more than one daemon pod Aug 19 15:15:06.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:06.978: INFO: Number of nodes with available pods: 2 Aug 19 15:15:06.978: INFO: Number of running nodes: 2, number of available pods: 2 Aug 19 15:15:06.979: INFO: Update the DaemonSet to trigger a rollout Aug 19 15:15:07.028: INFO: Updating DaemonSet daemon-set Aug 19 15:15:21.189: INFO: Roll back the DaemonSet before rollout is complete Aug 19 15:15:21.219: INFO: Updating DaemonSet daemon-set Aug 19 15:15:21.219: INFO: Make sure DaemonSet rollback is complete Aug 19 15:15:21.311: INFO: Wrong image for pod: daemon-set-r5jf9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 19 15:15:21.311: INFO: Pod daemon-set-r5jf9 is not available Aug 19 15:15:21.344: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:22.353: INFO: Wrong image for pod: daemon-set-r5jf9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 19 15:15:22.353: INFO: Pod daemon-set-r5jf9 is not available Aug 19 15:15:22.363: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:23.558: INFO: Wrong image for pod: daemon-set-r5jf9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 19 15:15:23.558: INFO: Pod daemon-set-r5jf9 is not available Aug 19 15:15:24.115: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:25.141: INFO: Wrong image for pod: daemon-set-r5jf9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Aug 19 15:15:25.141: INFO: Pod daemon-set-r5jf9 is not available Aug 19 15:15:26.296: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:27.108: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 19 15:15:27.533: INFO: Pod daemon-set-6fvtt is not available Aug 19 15:15:28.016: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1195, will wait for the garbage collector to delete the pods Aug 19 15:15:28.279: INFO: Deleting DaemonSet.extensions daemon-set took: 8.919157ms Aug 19 15:15:29.080: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.646868ms Aug 19 15:15:41.125: INFO: Number of nodes with available pods: 0 Aug 19 15:15:41.125: INFO: Number of running nodes: 0, number of available pods: 0 Aug 19 15:15:41.461: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1195/daemonsets","resourceVersion":"1523509"},"items":null} Aug 19 15:15:41.467: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1195/pods","resourceVersion":"1523509"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:15:41.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1195" for this suite. 
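For context, the sequence above creates a RollingUpdate DaemonSet, updates its image to the non-existent foo:non-existent to start a rollout, and rolls back before the rollout completes, expecting the original docker.io/library/httpd:2.4.38-alpine pods to keep running without restarts. A minimal sketch of such a DaemonSet (the label key is illustrative; the test's exact spec is not shown in this log):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine

The test drives the rollback through the API; from the command line the rough equivalent would be kubectl rollout undo daemonset/daemon-set.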
• [SLOW TEST:47.395 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":188,"skipped":2815,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:15:41.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6374 STEP: creating service affinity-clusterip in namespace services-6374 STEP: creating replication controller affinity-clusterip in namespace services-6374 I0819 15:15:43.467326 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-6374, replica count: 3 I0819 15:15:46.518868 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:15:49.519556 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:15:52.520449 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:15:55.521278 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 15:15:55.532: INFO: Creating new exec pod Aug 19 15:16:03.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6374 execpod-affinity9kwlz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Aug 19 15:16:04.854: INFO: stderr: "I0819 15:16:04.744993 3012 log.go:181] (0x400063b810) (0x40005f08c0) Create stream\nI0819 15:16:04.748384 3012 log.go:181] (0x400063b810) (0x40005f08c0) Stream added, broadcasting: 1\nI0819 15:16:04.761229 3012 log.go:181] (0x400063b810) Reply frame received for 1\nI0819 15:16:04.762260 3012 log.go:181] (0x400063b810) (0x4000634000) Create stream\nI0819 15:16:04.762370 3012 log.go:181] 
(0x400063b810) (0x4000634000) Stream added, broadcasting: 3\nI0819 15:16:04.764661 3012 log.go:181] (0x400063b810) Reply frame received for 3\nI0819 15:16:04.765391 3012 log.go:181] (0x400063b810) (0x400072e000) Create stream\nI0819 15:16:04.765542 3012 log.go:181] (0x400063b810) (0x400072e000) Stream added, broadcasting: 5\nI0819 15:16:04.767702 3012 log.go:181] (0x400063b810) Reply frame received for 5\nI0819 15:16:04.830468 3012 log.go:181] (0x400063b810) Data frame received for 5\nI0819 15:16:04.830753 3012 log.go:181] (0x400063b810) Data frame received for 3\nI0819 15:16:04.830930 3012 log.go:181] (0x400072e000) (5) Data frame handling\nI0819 15:16:04.831112 3012 log.go:181] (0x4000634000) (3) Data frame handling\nI0819 15:16:04.833437 3012 log.go:181] (0x400063b810) Data frame received for 1\nI0819 15:16:04.833542 3012 log.go:181] (0x40005f08c0) (1) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0819 15:16:04.834340 3012 log.go:181] (0x400072e000) (5) Data frame sent\nI0819 15:16:04.834764 3012 log.go:181] (0x40005f08c0) (1) Data frame sent\nI0819 15:16:04.834877 3012 log.go:181] (0x400063b810) Data frame received for 5\nI0819 15:16:04.834959 3012 log.go:181] (0x400072e000) (5) Data frame handling\nI0819 15:16:04.835020 3012 log.go:181] (0x400072e000) (5) Data frame sent\nI0819 15:16:04.835071 3012 log.go:181] (0x400063b810) Data frame received for 5\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0819 15:16:04.835117 3012 log.go:181] (0x400072e000) (5) Data frame handling\nI0819 15:16:04.836147 3012 log.go:181] (0x400063b810) (0x40005f08c0) Stream removed, broadcasting: 1\nI0819 15:16:04.838779 3012 log.go:181] (0x400063b810) Go away received\nI0819 15:16:04.841813 3012 log.go:181] (0x400063b810) (0x40005f08c0) Stream removed, broadcasting: 1\nI0819 15:16:04.842092 3012 log.go:181] (0x400063b810) (0x4000634000) Stream removed, broadcasting: 3\nI0819 15:16:04.842270 3012 log.go:181] (0x400063b810) (0x400072e000) Stream removed, broadcasting: 5\n" Aug 19 15:16:04.855: INFO: stdout: "" Aug 19 15:16:04.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6374 execpod-affinity9kwlz -- /bin/sh -x -c nc -zv -t -w 2 10.105.253.124 80' Aug 19 15:16:06.484: INFO: stderr: "I0819 15:16:06.356985 3033 log.go:181] (0x4000ab00b0) (0x4000a341e0) Create stream\nI0819 15:16:06.362441 3033 log.go:181] (0x4000ab00b0) (0x4000a341e0) Stream added, broadcasting: 1\nI0819 15:16:06.390926 3033 log.go:181] (0x4000ab00b0) Reply frame received for 1\nI0819 15:16:06.391483 3033 log.go:181] (0x4000ab00b0) (0x4000e0e1e0) Create stream\nI0819 15:16:06.391543 3033 log.go:181] (0x4000ab00b0) (0x4000e0e1e0) Stream added, broadcasting: 3\nI0819 15:16:06.393173 3033 log.go:181] (0x4000ab00b0) Reply frame received for 3\nI0819 15:16:06.393452 3033 log.go:181] (0x4000ab00b0) (0x4000e0e280) Create stream\nI0819 15:16:06.393513 3033 log.go:181] (0x4000ab00b0) (0x4000e0e280) Stream added, broadcasting: 5\nI0819 15:16:06.394664 3033 log.go:181] (0x4000ab00b0) Reply frame received for 5\nI0819 15:16:06.459824 3033 log.go:181] (0x4000ab00b0) Data frame received for 5\nI0819 15:16:06.460164 3033 log.go:181] (0x4000ab00b0) Data frame received for 3\nI0819 15:16:06.460287 3033 log.go:181] (0x4000e0e1e0) (3) Data frame handling\nI0819 15:16:06.461092 3033 log.go:181] (0x4000e0e280) (5) Data frame handling\nI0819 15:16:06.461337 3033 log.go:181] (0x4000ab00b0) Data frame received for 1\nI0819 15:16:06.461442 3033 
log.go:181] (0x4000a341e0) (1) Data frame handling\nI0819 15:16:06.463347 3033 log.go:181] (0x4000a341e0) (1) Data frame sent\nI0819 15:16:06.463793 3033 log.go:181] (0x4000e0e280) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.253.124 80\nConnection to 10.105.253.124 80 port [tcp/http] succeeded!\nI0819 15:16:06.464231 3033 log.go:181] (0x4000ab00b0) Data frame received for 5\nI0819 15:16:06.464343 3033 log.go:181] (0x4000e0e280) (5) Data frame handling\nI0819 15:16:06.465339 3033 log.go:181] (0x4000ab00b0) (0x4000a341e0) Stream removed, broadcasting: 1\nI0819 15:16:06.468127 3033 log.go:181] (0x4000ab00b0) Go away received\nI0819 15:16:06.472026 3033 log.go:181] (0x4000ab00b0) (0x4000a341e0) Stream removed, broadcasting: 1\nI0819 15:16:06.472701 3033 log.go:181] (0x4000ab00b0) (0x4000e0e1e0) Stream removed, broadcasting: 3\nI0819 15:16:06.473096 3033 log.go:181] (0x4000ab00b0) (0x4000e0e280) Stream removed, broadcasting: 5\n" Aug 19 15:16:06.485: INFO: stdout: "" Aug 19 15:16:06.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6374 execpod-affinity9kwlz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.253.124:80/ ; done' Aug 19 15:16:08.125: INFO: stderr: "I0819 15:16:07.944611 3053 log.go:181] (0x40001bc4d0) (0x400021e1e0) Create stream\nI0819 15:16:07.947962 3053 log.go:181] (0x40001bc4d0) (0x400021e1e0) Stream added, broadcasting: 1\nI0819 15:16:07.958733 3053 log.go:181] (0x40001bc4d0) Reply frame received for 1\nI0819 15:16:07.960022 3053 log.go:181] (0x40001bc4d0) (0x4000408be0) Create stream\nI0819 15:16:07.960140 3053 log.go:181] (0x40001bc4d0) (0x4000408be0) Stream added, broadcasting: 3\nI0819 15:16:07.962099 3053 log.go:181] (0x40001bc4d0) Reply frame received for 3\nI0819 15:16:07.962522 3053 log.go:181] (0x40001bc4d0) (0x4000716500) Create stream\nI0819 15:16:07.962611 3053 log.go:181] (0x40001bc4d0) (0x4000716500) Stream added, broadcasting: 5\nI0819 15:16:07.964342 3053 log.go:181] (0x40001bc4d0) Reply frame received for 5\nI0819 15:16:08.027346 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.027908 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.028062 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.028289 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.029022 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.030040 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.030132 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.030203 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.030311 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.031168 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.031253 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.031318 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.031387 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.031445 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.031510 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.035893 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.036050 3053 log.go:181] 
(0x4000408be0) (3) Data frame handling\nI0819 15:16:08.036182 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.036294 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.036376 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.036461 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.036583 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.036703 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.036876 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.039884 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.039973 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.040073 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.040574 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.040670 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.040779 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.040937 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.041081 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.041222 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.044152 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.044246 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.044440 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.044994 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.045126 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.049427 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.051486 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.059027 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.061754 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.061850 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.061919 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.062001 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.062065 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.062141 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.062207 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.062282 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.062339 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.062398 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.062620 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.062677 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.062735 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.062785 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.062829 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.062911 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.063016 3053 log.go:181] 
(0x4000716500) (5) Data frame handling\nI0819 15:16:08.063091 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.063224 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.063365 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.063416 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.063491 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.063540 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.063583 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.063637 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.064324 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.064415 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.064523 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.064711 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.064805 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.064859 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.064910 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.064954 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.065007 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.068181 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.068238 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.068297 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.068710 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.068803 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.068867 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.068936 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.069009 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.069081 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.072219 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.072310 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.072406 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.072634 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.072698 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.072799 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.072893 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.073017 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.073137 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.077028 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.077135 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.077250 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.077562 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.077680 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0819 15:16:08.077770 3053 log.go:181] (0x40001bc4d0) Data frame received 
for 3\nI0819 15:16:08.077854 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.077932 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.078043 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.078136 3053 log.go:181] (0x4000716500) (5) Data frame handling\n 2 http://10.105.253.124:80/\nI0819 15:16:08.078247 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.078437 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.082140 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.082206 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.082304 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.082744 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.082855 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.082957 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.083051 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.083122 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.083189 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.086692 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.086780 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.086874 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.087460 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.087594 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.087742 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.087841 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.088211 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.088305 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.092997 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.093084 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.093178 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.093941 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.094054 3053 log.go:181] (0x4000716500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.094227 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.094356 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.094467 3053 log.go:181] (0x4000716500) (5) Data frame sent\nI0819 15:16:08.094589 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.099281 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.099349 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.099437 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.099582 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.099745 3053 log.go:181] (0x4000716500) (5) Data frame sent\n+ echo\n+ I0819 15:16:08.099868 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.099990 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.100112 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.100245 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 
15:16:08.100387 3053 log.go:181] (0x4000716500) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://10.105.253.124:80/\nI0819 15:16:08.100512 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.100639 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.104010 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.104102 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.104206 3053 log.go:181] (0x4000408be0) (3) Data frame sent\nI0819 15:16:08.104655 3053 log.go:181] (0x40001bc4d0) Data frame received for 3\nI0819 15:16:08.104814 3053 log.go:181] (0x4000408be0) (3) Data frame handling\nI0819 15:16:08.105095 3053 log.go:181] (0x40001bc4d0) Data frame received for 5\nI0819 15:16:08.105222 3053 log.go:181] (0x4000716500) (5) Data frame handling\nI0819 15:16:08.106270 3053 log.go:181] (0x40001bc4d0) Data frame received for 1\nI0819 15:16:08.106390 3053 log.go:181] (0x400021e1e0) (1) Data frame handling\nI0819 15:16:08.106514 3053 log.go:181] (0x400021e1e0) (1) Data frame sent\nI0819 15:16:08.107184 3053 log.go:181] (0x40001bc4d0) (0x400021e1e0) Stream removed, broadcasting: 1\nI0819 15:16:08.110212 3053 log.go:181] (0x40001bc4d0) Go away received\nI0819 15:16:08.114094 3053 log.go:181] (0x40001bc4d0) (0x400021e1e0) Stream removed, broadcasting: 1\nI0819 15:16:08.114589 3053 log.go:181] (0x40001bc4d0) (0x4000408be0) Stream removed, broadcasting: 3\nI0819 15:16:08.114917 3053 log.go:181] (0x40001bc4d0) (0x4000716500) Stream removed, broadcasting: 5\n" Aug 19 15:16:08.129: INFO: stdout: "\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x\naffinity-clusterip-k2b6x" Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Received response from host: affinity-clusterip-k2b6x Aug 19 15:16:08.129: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-6374, will wait for the garbage collector to delete the pods Aug 19 15:16:08.282: INFO: Deleting ReplicationController 
affinity-clusterip took: 14.940916ms Aug 19 15:16:08.682: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.671071ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:16:19.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6374" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:37.980 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":189,"skipped":2853,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:16:19.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 19 15:16:20.033: INFO: Waiting up to 5m0s for pod "pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb" in namespace "emptydir-4539" to be "Succeeded or Failed" Aug 19 15:16:20.045: INFO: Pod "pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.135757ms Aug 19 15:16:22.138: INFO: Pod "pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104581342s Aug 19 15:16:24.180: INFO: Pod "pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.146463554s STEP: Saw pod success Aug 19 15:16:24.180: INFO: Pod "pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb" satisfied condition "Succeeded or Failed" Aug 19 15:16:24.217: INFO: Trying to get logs from node latest-worker pod pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb container test-container: STEP: delete the pod Aug 19 15:16:24.601: INFO: Waiting for pod pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb to disappear Aug 19 15:16:24.609: INFO: Pod pod-ca0dca5c-74ab-4bcc-8089-0be90a77d3cb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:16:24.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4539" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":2875,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:16:24.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:16:25.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9716" for this suite. 
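For context, the Lease API exercised above lives in the coordination.k8s.io group. A minimal Lease object of the kind the test creates, updates, and deletes (names and values are illustrative):

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
  namespace: lease-test-example
spec:
  holderIdentity: holder-1
  leaseDurationSeconds: 30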
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":191,"skipped":2892,"failed":0} ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:16:25.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 19 15:16:33.841: INFO: Successfully updated pod "adopt-release-q7bns" STEP: Checking that the Job readopts the Pod Aug 19 15:16:33.841: INFO: Waiting up to 15m0s for pod "adopt-release-q7bns" in namespace "job-4690" to be "adopted" Aug 19 15:16:33.994: INFO: Pod "adopt-release-q7bns": Phase="Running", Reason="", readiness=true. Elapsed: 152.503375ms Aug 19 15:16:36.001: INFO: Pod "adopt-release-q7bns": Phase="Running", Reason="", readiness=true. Elapsed: 2.160186395s Aug 19 15:16:36.002: INFO: Pod "adopt-release-q7bns" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 19 15:16:36.518: INFO: Successfully updated pod "adopt-release-q7bns" STEP: Checking that the Job releases the Pod Aug 19 15:16:36.518: INFO: Waiting up to 15m0s for pod "adopt-release-q7bns" in namespace "job-4690" to be "released" Aug 19 15:16:36.542: INFO: Pod "adopt-release-q7bns": Phase="Running", Reason="", readiness=true. Elapsed: 23.087067ms Aug 19 15:16:38.547: INFO: Pod "adopt-release-q7bns": Phase="Running", Reason="", readiness=true. Elapsed: 2.028866056s Aug 19 15:16:38.548: INFO: Pod "adopt-release-q7bns" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:16:38.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4690" for this suite. 
• [SLOW TEST:13.408 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":192,"skipped":2892,"failed":0} S ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:16:38.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6373 STEP: creating service affinity-nodeport-transition in namespace services-6373 STEP: creating replication controller affinity-nodeport-transition in namespace services-6373 I0819 15:16:39.389022 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6373, replica count: 3 I0819 15:16:42.440232 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:16:45.441008 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:16:48.441866 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:16:51.442484 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 15:16:51.461: INFO: Creating new exec pod Aug 19 15:17:00.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6373 execpod-affinity4zt4f -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Aug 19 15:17:02.835: INFO: stderr: "I0819 15:17:02.716441 3073 log.go:181] (0x400013c6e0) (0x40006b01e0) Create stream\nI0819 15:17:02.720349 3073 log.go:181] (0x400013c6e0) (0x40006b01e0) Stream added, broadcasting: 1\nI0819 15:17:02.736256 3073 log.go:181] (0x400013c6e0) Reply frame received for 1\nI0819 15:17:02.738031 3073 log.go:181] (0x400013c6e0) (0x4000223720) Create 
stream\nI0819 15:17:02.738207 3073 log.go:181] (0x400013c6e0) (0x4000223720) Stream added, broadcasting: 3\nI0819 15:17:02.741559 3073 log.go:181] (0x400013c6e0) Reply frame received for 3\nI0819 15:17:02.741845 3073 log.go:181] (0x400013c6e0) (0x4000318aa0) Create stream\nI0819 15:17:02.741914 3073 log.go:181] (0x400013c6e0) (0x4000318aa0) Stream added, broadcasting: 5\nI0819 15:17:02.743223 3073 log.go:181] (0x400013c6e0) Reply frame received for 5\nI0819 15:17:02.810912 3073 log.go:181] (0x400013c6e0) Data frame received for 5\nI0819 15:17:02.811314 3073 log.go:181] (0x400013c6e0) Data frame received for 3\nI0819 15:17:02.811486 3073 log.go:181] (0x4000318aa0) (5) Data frame handling\nI0819 15:17:02.811573 3073 log.go:181] (0x4000223720) (3) Data frame handling\nI0819 15:17:02.812612 3073 log.go:181] (0x400013c6e0) Data frame received for 1\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0819 15:17:02.812960 3073 log.go:181] (0x40006b01e0) (1) Data frame handling\nI0819 15:17:02.813188 3073 log.go:181] (0x4000318aa0) (5) Data frame sent\nI0819 15:17:02.813441 3073 log.go:181] (0x40006b01e0) (1) Data frame sent\nI0819 15:17:02.813547 3073 log.go:181] (0x400013c6e0) Data frame received for 5\nI0819 15:17:02.813625 3073 log.go:181] (0x4000318aa0) (5) Data frame handling\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0819 15:17:02.815542 3073 log.go:181] (0x4000318aa0) (5) Data frame sent\nI0819 15:17:02.815617 3073 log.go:181] (0x400013c6e0) Data frame received for 5\nI0819 15:17:02.815672 3073 log.go:181] (0x4000318aa0) (5) Data frame handling\nI0819 15:17:02.816509 3073 log.go:181] (0x400013c6e0) (0x40006b01e0) Stream removed, broadcasting: 1\nI0819 15:17:02.819466 3073 log.go:181] (0x400013c6e0) Go away received\nI0819 15:17:02.823395 3073 log.go:181] (0x400013c6e0) (0x40006b01e0) Stream removed, broadcasting: 1\nI0819 15:17:02.823709 3073 log.go:181] (0x400013c6e0) (0x4000223720) Stream removed, broadcasting: 3\nI0819 15:17:02.824019 3073 log.go:181] (0x400013c6e0) (0x4000318aa0) Stream removed, broadcasting: 5\n" Aug 19 15:17:02.836: INFO: stdout: "" Aug 19 15:17:02.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6373 execpod-affinity4zt4f -- /bin/sh -x -c nc -zv -t -w 2 10.108.119.231 80' Aug 19 15:17:04.541: INFO: stderr: "I0819 15:17:04.377744 3093 log.go:181] (0x4000bda000) (0x4000638000) Create stream\nI0819 15:17:04.381439 3093 log.go:181] (0x4000bda000) (0x4000638000) Stream added, broadcasting: 1\nI0819 15:17:04.394054 3093 log.go:181] (0x4000bda000) Reply frame received for 1\nI0819 15:17:04.394625 3093 log.go:181] (0x4000bda000) (0x40006c8000) Create stream\nI0819 15:17:04.394685 3093 log.go:181] (0x4000bda000) (0x40006c8000) Stream added, broadcasting: 3\nI0819 15:17:04.396613 3093 log.go:181] (0x4000bda000) Reply frame received for 3\nI0819 15:17:04.397251 3093 log.go:181] (0x4000bda000) (0x4000720000) Create stream\nI0819 15:17:04.397373 3093 log.go:181] (0x4000bda000) (0x4000720000) Stream added, broadcasting: 5\nI0819 15:17:04.398889 3093 log.go:181] (0x4000bda000) Reply frame received for 5\nI0819 15:17:04.468983 3093 log.go:181] (0x4000bda000) Data frame received for 3\nI0819 15:17:04.469355 3093 log.go:181] (0x4000bda000) Data frame received for 5\nI0819 15:17:04.469538 3093 log.go:181] (0x4000720000) (5) Data frame handling\nI0819 15:17:04.469801 3093 log.go:181] (0x40006c8000) (3) Data frame handling\nI0819 15:17:04.470064 3093 log.go:181] 
(0x4000bda000) Data frame received for 1\nI0819 15:17:04.470196 3093 log.go:181] (0x4000638000) (1) Data frame handling\nI0819 15:17:04.471758 3093 log.go:181] (0x4000720000) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.119.231 80\nConnection to 10.108.119.231 80 port [tcp/http] succeeded!\nI0819 15:17:04.472341 3093 log.go:181] (0x4000bda000) Data frame received for 5\nI0819 15:17:04.472398 3093 log.go:181] (0x4000720000) (5) Data frame handling\nI0819 15:17:04.472886 3093 log.go:181] (0x4000638000) (1) Data frame sent\nI0819 15:17:04.473549 3093 log.go:181] (0x4000bda000) (0x4000638000) Stream removed, broadcasting: 1\nI0819 15:17:04.475908 3093 log.go:181] (0x4000bda000) Go away received\nI0819 15:17:04.531047 3093 log.go:181] (0x4000bda000) (0x4000638000) Stream removed, broadcasting: 1\nI0819 15:17:04.531372 3093 log.go:181] (0x4000bda000) (0x40006c8000) Stream removed, broadcasting: 3\nI0819 15:17:04.531539 3093 log.go:181] (0x4000bda000) (0x4000720000) Stream removed, broadcasting: 5\n" Aug 19 15:17:04.542: INFO: stdout: "" Aug 19 15:17:04.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6373 execpod-affinity4zt4f -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31163' Aug 19 15:17:06.264: INFO: stderr: "I0819 15:17:06.172439 3113 log.go:181] (0x400014c420) (0x40009b60a0) Create stream\nI0819 15:17:06.176247 3113 log.go:181] (0x400014c420) (0x40009b60a0) Stream added, broadcasting: 1\nI0819 15:17:06.186813 3113 log.go:181] (0x400014c420) Reply frame received for 1\nI0819 15:17:06.187551 3113 log.go:181] (0x400014c420) (0x4000c60460) Create stream\nI0819 15:17:06.187679 3113 log.go:181] (0x400014c420) (0x4000c60460) Stream added, broadcasting: 3\nI0819 15:17:06.189947 3113 log.go:181] (0x400014c420) Reply frame received for 3\nI0819 15:17:06.190248 3113 log.go:181] (0x400014c420) (0x40002b3d60) Create stream\nI0819 15:17:06.190316 3113 log.go:181] (0x400014c420) (0x40002b3d60) Stream added, broadcasting: 5\nI0819 15:17:06.192043 3113 log.go:181] (0x400014c420) Reply frame received for 5\nI0819 15:17:06.243416 3113 log.go:181] (0x400014c420) Data frame received for 5\nI0819 15:17:06.243734 3113 log.go:181] (0x400014c420) Data frame received for 3\nI0819 15:17:06.244343 3113 log.go:181] (0x4000c60460) (3) Data frame handling\nI0819 15:17:06.244873 3113 log.go:181] (0x40002b3d60) (5) Data frame handling\nI0819 15:17:06.245148 3113 log.go:181] (0x400014c420) Data frame received for 1\nI0819 15:17:06.245311 3113 log.go:181] (0x40009b60a0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31163\nConnection to 172.18.0.11 31163 port [tcp/31163] succeeded!\nI0819 15:17:06.247132 3113 log.go:181] (0x40002b3d60) (5) Data frame sent\nI0819 15:17:06.247804 3113 log.go:181] (0x400014c420) Data frame received for 5\nI0819 15:17:06.247862 3113 log.go:181] (0x40002b3d60) (5) Data frame handling\nI0819 15:17:06.248116 3113 log.go:181] (0x40009b60a0) (1) Data frame sent\nI0819 15:17:06.249692 3113 log.go:181] (0x400014c420) (0x40009b60a0) Stream removed, broadcasting: 1\nI0819 15:17:06.250280 3113 log.go:181] (0x400014c420) Go away received\nI0819 15:17:06.253577 3113 log.go:181] (0x400014c420) (0x40009b60a0) Stream removed, broadcasting: 1\nI0819 15:17:06.253821 3113 log.go:181] (0x400014c420) (0x4000c60460) Stream removed, broadcasting: 3\nI0819 15:17:06.253994 3113 log.go:181] (0x400014c420) (0x40002b3d60) Stream removed, broadcasting: 5\n" Aug 19 15:17:06.265: INFO: stdout: "" Aug 19 15:17:06.265: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6373 execpod-affinity4zt4f -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31163' Aug 19 15:17:07.850: INFO: stderr: "I0819 15:17:07.752689 3133 log.go:181] (0x40007e3130) (0x4000308460) Create stream\nI0819 15:17:07.755216 3133 log.go:181] (0x40007e3130) (0x4000308460) Stream added, broadcasting: 1\nI0819 15:17:07.766449 3133 log.go:181] (0x40007e3130) Reply frame received for 1\nI0819 15:17:07.766965 3133 log.go:181] (0x40007e3130) (0x4000824140) Create stream\nI0819 15:17:07.767022 3133 log.go:181] (0x40007e3130) (0x4000824140) Stream added, broadcasting: 3\nI0819 15:17:07.768340 3133 log.go:181] (0x40007e3130) Reply frame received for 3\nI0819 15:17:07.768641 3133 log.go:181] (0x40007e3130) (0x40008248c0) Create stream\nI0819 15:17:07.768719 3133 log.go:181] (0x40007e3130) (0x40008248c0) Stream added, broadcasting: 5\nI0819 15:17:07.769973 3133 log.go:181] (0x40007e3130) Reply frame received for 5\nI0819 15:17:07.829020 3133 log.go:181] (0x40007e3130) Data frame received for 5\nI0819 15:17:07.829534 3133 log.go:181] (0x40007e3130) Data frame received for 1\nI0819 15:17:07.829814 3133 log.go:181] (0x40007e3130) Data frame received for 3\nI0819 15:17:07.830009 3133 log.go:181] (0x4000824140) (3) Data frame handling\nI0819 15:17:07.830222 3133 log.go:181] (0x40008248c0) (5) Data frame handling\nI0819 15:17:07.830383 3133 log.go:181] (0x4000308460) (1) Data frame handling\nI0819 15:17:07.832362 3133 log.go:181] (0x40008248c0) (5) Data frame sent\nI0819 15:17:07.832514 3133 log.go:181] (0x40007e3130) Data frame received for 5\nI0819 15:17:07.832609 3133 log.go:181] (0x40008248c0) (5) Data frame handling\nI0819 15:17:07.832673 3133 log.go:181] (0x4000308460) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31163\nConnection to 172.18.0.14 31163 port [tcp/31163] succeeded!\nI0819 15:17:07.834227 3133 log.go:181] (0x40007e3130) (0x4000308460) Stream removed, broadcasting: 1\nI0819 15:17:07.835861 3133 log.go:181] (0x40007e3130) Go away received\nI0819 15:17:07.838823 3133 log.go:181] (0x40007e3130) (0x4000308460) Stream removed, broadcasting: 1\nI0819 15:17:07.839068 3133 log.go:181] (0x40007e3130) (0x4000824140) Stream removed, broadcasting: 3\nI0819 15:17:07.839244 3133 log.go:181] (0x40007e3130) (0x40008248c0) Stream removed, broadcasting: 5\n" Aug 19 15:17:07.851: INFO: stdout: "" Aug 19 15:17:07.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6373 execpod-affinity4zt4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31163/ ; done' Aug 19 15:17:09.634: INFO: stderr: "I0819 15:17:09.449521 3153 log.go:181] (0x400013a420) (0x40009559a0) Create stream\nI0819 15:17:09.454813 3153 log.go:181] (0x400013a420) (0x40009559a0) Stream added, broadcasting: 1\nI0819 15:17:09.468509 3153 log.go:181] (0x400013a420) Reply frame received for 1\nI0819 15:17:09.469389 3153 log.go:181] (0x400013a420) (0x4000955c20) Create stream\nI0819 15:17:09.469488 3153 log.go:181] (0x400013a420) (0x4000955c20) Stream added, broadcasting: 3\nI0819 15:17:09.471236 3153 log.go:181] (0x400013a420) Reply frame received for 3\nI0819 15:17:09.471477 3153 log.go:181] (0x400013a420) (0x40001457c0) Create stream\nI0819 15:17:09.471546 3153 log.go:181] (0x400013a420) (0x40001457c0) Stream added, broadcasting: 5\nI0819 15:17:09.473007 3153 log.go:181] (0x400013a420) Reply frame 
received for 5\nI0819 15:17:09.536488 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.537257 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.538107 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.539669 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.539762 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.539854 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.539945 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.540032 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.540646 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.541184 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.541343 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.541444 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.541528 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.541607 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.541703 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.543968 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.544041 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.544117 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.544495 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.544573 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -sI0819 15:17:09.544650 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.544825 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.544927 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.545010 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.545083 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.545137 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.545213 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.549042 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.549130 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.549218 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.549482 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.549611 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -qI0819 15:17:09.549730 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.549868 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.549973 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.550085 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.550204 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.550309 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.550395 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.554494 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.554568 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.554690 3153 log.go:181] 
(0x4000955c20) (3) Data frame sent\nI0819 15:17:09.555228 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.555329 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.555434 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.555568 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.555694 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.555818 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.559535 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.559656 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.559770 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.559914 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0819 15:17:09.560014 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.560122 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.560229 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.560375 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.560474 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.560563 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n 2 http://172.18.0.11:31163/\nI0819 15:17:09.560673 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.560880 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.565987 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.566064 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.566160 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.566317 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.566467 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.566612 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.566752 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.566861 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.566992 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.571510 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.571625 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.571783 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.572349 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.572442 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.572576 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.572697 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.572844 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.572914 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.577096 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.577172 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.577264 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.577548 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.577638 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.577741 3153 
log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.577825 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.577939 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.578028 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.581849 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.581940 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.582074 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.582625 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.582792 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.582923 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.583077 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.583204 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.583335 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.586046 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.586141 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.586242 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.587252 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.587351 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.587433 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.587613 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.587700 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.587779 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.590437 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.590532 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.590633 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.590719 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.590831 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.590969 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.591088 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.591212 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.591323 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.595251 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.595312 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.595376 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.595900 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.595986 3153 log.go:181] (0x40001457c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.596111 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.596228 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.596308 3153 log.go:181] (0x40001457c0) (5) Data frame sent\nI0819 15:17:09.596396 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.598957 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.599034 3153 
log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.599220 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.599377 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.599464 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.599562 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/I0819 15:17:09.599645 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.599718 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.599806 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.599882 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.599943 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.600016 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n\nI0819 15:17:09.605016 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.605072 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.605136 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.605576 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.605652 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.605716 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.605773 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.605827 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.605894 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.609416 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.609517 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.609600 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.610324 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.610418 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.610513 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.610595 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.610749 3153 log.go:181] (0x40001457c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:09.610823 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.614435 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.614505 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.614581 3153 log.go:181] (0x4000955c20) (3) Data frame sent\nI0819 15:17:09.614946 3153 log.go:181] (0x400013a420) Data frame received for 5\nI0819 15:17:09.615015 3153 log.go:181] (0x40001457c0) (5) Data frame handling\nI0819 15:17:09.615143 3153 log.go:181] (0x400013a420) Data frame received for 3\nI0819 15:17:09.615225 3153 log.go:181] (0x4000955c20) (3) Data frame handling\nI0819 15:17:09.617441 3153 log.go:181] (0x400013a420) Data frame received for 1\nI0819 15:17:09.617501 3153 log.go:181] (0x40009559a0) (1) Data frame handling\nI0819 15:17:09.617564 3153 log.go:181] (0x40009559a0) (1) Data frame sent\nI0819 15:17:09.618685 3153 log.go:181] (0x400013a420) (0x40009559a0) Stream removed, broadcasting: 1\nI0819 15:17:09.621324 3153 log.go:181] (0x400013a420) Go away received\nI0819 15:17:09.624475 3153 log.go:181] (0x400013a420) (0x40009559a0) Stream removed, broadcasting: 1\nI0819 15:17:09.624696 3153 log.go:181] 
(0x400013a420) (0x4000955c20) Stream removed, broadcasting: 3\nI0819 15:17:09.624981 3153 log.go:181] (0x400013a420) (0x40001457c0) Stream removed, broadcasting: 5\n" Aug 19 15:17:09.638: INFO: stdout: "\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-876jj\naffinity-nodeport-transition-876jj\naffinity-nodeport-transition-876jj\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-876jj\naffinity-nodeport-transition-876jj\naffinity-nodeport-transition-nz2hp\naffinity-nodeport-transition-nz2hp" Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-876jj Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-876jj Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-876jj Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-876jj Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-876jj Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.639: INFO: Received response from host: affinity-nodeport-transition-nz2hp Aug 19 15:17:09.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6373 execpod-affinity4zt4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:31163/ ; done' Aug 19 15:17:11.435: INFO: stderr: "I0819 15:17:11.251311 3173 log.go:181] (0x4000e21760) (0x4000e98aa0) Create stream\nI0819 15:17:11.253755 3173 log.go:181] (0x4000e21760) (0x4000e98aa0) Stream added, broadcasting: 1\nI0819 15:17:11.272516 3173 log.go:181] (0x4000e21760) Reply frame received for 1\nI0819 15:17:11.273509 3173 log.go:181] (0x4000e21760) (0x4000e98000) Create stream\nI0819 15:17:11.273618 3173 log.go:181] (0x4000e21760) (0x4000e98000) Stream added, broadcasting: 3\nI0819 15:17:11.275424 3173 log.go:181] (0x4000e21760) Reply frame received for 3\nI0819 15:17:11.275682 3173 log.go:181] (0x4000e21760) (0x4000a9b360) Create stream\nI0819 15:17:11.275739 3173 log.go:181] (0x4000e21760) (0x4000a9b360) Stream added, broadcasting: 5\nI0819 15:17:11.276705 3173 log.go:181] (0x4000e21760) Reply frame received for 5\nI0819 15:17:11.322057 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.322407 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.322585 3173 
log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.322705 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.323799 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.324041 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.325355 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.325432 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.325515 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.326195 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.326264 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.326367 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.326491 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.326580 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.326668 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.331089 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.331175 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.331258 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.331774 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.331884 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.331971 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.332069 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.332151 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.332228 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.338726 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.338808 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.338918 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.339534 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.339621 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.339688 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.339754 3173 log.go:181] (0x4000e21760) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.339813 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.339889 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.343995 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.344068 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.344164 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.344853 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.344951 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.345054 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.345168 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.345283 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.345394 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.349997 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 
15:17:11.350125 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.350274 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.350941 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.351064 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.351198 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.351331 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.351498 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.351623 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.355903 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.356062 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.356244 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.356646 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.356838 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.356925 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.356995 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.357043 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.357116 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.363888 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.364039 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.364135 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.364255 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.364344 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.364429 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.367613 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.367673 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.367741 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.368397 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.368485 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.368549 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.368634 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.368696 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.368788 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.372636 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.372818 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.372941 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.373754 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.373850 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.373939 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.374032 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.374113 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.374208 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0819 15:17:11.374298 3173 log.go:181] 
(0x4000e21760) Data frame received for 5\nI0819 15:17:11.374366 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.374478 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n 2 http://172.18.0.11:31163/\nI0819 15:17:11.377514 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.377635 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.377787 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.377998 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.378063 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.378120 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -sI0819 15:17:11.378175 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.378224 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.378278 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.378330 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.378378 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.378438 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.382377 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.382451 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.382530 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.383215 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.383311 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.383394 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.383487 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.383548 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.383618 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.387544 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.387645 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.387742 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.388643 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.388847 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.388937 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.389055 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.389148 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.389224 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.393177 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.393299 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.393476 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.394127 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.394261 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:31163/\nI0819 15:17:11.394392 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.394538 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.394644 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.394739 3173 log.go:181] (0x4000e98000) (3) Data frame 
sent\nI0819 15:17:11.399046 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.399154 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.399254 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.399930 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.400012 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.400146 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.400311 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0819 15:17:11.400411 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.400498 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\nI0819 15:17:11.400583 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.400691 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.400907 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n 2 http://172.18.0.11:31163/\nI0819 15:17:11.406658 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.406788 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.406927 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.407073 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.407209 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.407363 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.407501 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.407607 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0819 15:17:11.407719 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.407788 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.407890 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.407988 3173 log.go:181] (0x4000a9b360) (5) Data frame sent\n http://172.18.0.11:31163/\nI0819 15:17:11.411442 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.411525 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.411611 3173 log.go:181] (0x4000e98000) (3) Data frame sent\nI0819 15:17:11.412198 3173 log.go:181] (0x4000e21760) Data frame received for 5\nI0819 15:17:11.412328 3173 log.go:181] (0x4000a9b360) (5) Data frame handling\nI0819 15:17:11.412439 3173 log.go:181] (0x4000e21760) Data frame received for 3\nI0819 15:17:11.412528 3173 log.go:181] (0x4000e98000) (3) Data frame handling\nI0819 15:17:11.414201 3173 log.go:181] (0x4000e21760) Data frame received for 1\nI0819 15:17:11.414270 3173 log.go:181] (0x4000e98aa0) (1) Data frame handling\nI0819 15:17:11.414343 3173 log.go:181] (0x4000e98aa0) (1) Data frame sent\nI0819 15:17:11.415322 3173 log.go:181] (0x4000e21760) (0x4000e98aa0) Stream removed, broadcasting: 1\nI0819 15:17:11.417882 3173 log.go:181] (0x4000e21760) Go away received\nI0819 15:17:11.421923 3173 log.go:181] (0x4000e21760) (0x4000e98aa0) Stream removed, broadcasting: 1\nI0819 15:17:11.422425 3173 log.go:181] (0x4000e21760) (0x4000e98000) Stream removed, broadcasting: 3\nI0819 15:17:11.422723 3173 log.go:181] (0x4000e21760) (0x4000a9b360) Stream removed, broadcasting: 5\n" Aug 19 15:17:11.440: INFO: stdout: 
"\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r\naffinity-nodeport-transition-2th7r" Aug 19 15:17:11.440: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.440: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.440: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.440: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.440: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Received response from host: affinity-nodeport-transition-2th7r Aug 19 15:17:11.441: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6373, will wait for the garbage collector to delete the pods Aug 19 15:17:11.570: INFO: Deleting ReplicationController affinity-nodeport-transition took: 8.536386ms Aug 19 15:17:11.670: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.718133ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:17:20.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6373" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:42.019 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":193,"skipped":2893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:17:20.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0819 15:17:32.290813 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 19 15:18:34.341: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:18:34.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5729" for this suite. 
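------------------------------
The "delete the rc" step in the garbage collector test above removes the ReplicationController without orphaning, so the pods it created, which carry an ownerReference to the RC, are deleted by the garbage collector, and the test then waits for them to disappear. A sketch of the equivalent delete call with an explicit propagation policy (client construction as in the earlier sketch; the RC name is illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Background propagation: the RC is deleted immediately, then the
        // garbage collector removes its pods via their ownerReferences.
        // DeletePropagationOrphan would leave the pods behind instead.
        policy := metav1.DeletePropagationBackground
        err = cs.CoreV1().ReplicationControllers("gc-5729").Delete(
            context.TODO(),
            "simpletest.rc", // illustrative name
            metav1.DeleteOptions{PropagationPolicy: &policy},
        )
        if err != nil {
            panic(err)
        }
    }
------------------------------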
• [SLOW TEST:73.773 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":194,"skipped":2926,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:18:34.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-8fa4d3d5-5e33-4a03-bc76-24927de05dde STEP: Creating a pod to test consume configMaps Aug 19 15:18:34.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05" in namespace "configmap-3708" to be "Succeeded or Failed" Aug 19 15:18:35.012: INFO: Pod "pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05": Phase="Pending", Reason="", readiness=false. Elapsed: 15.489285ms Aug 19 15:18:37.133: INFO: Pod "pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137301376s Aug 19 15:18:39.140: INFO: Pod "pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143409427s STEP: Saw pod success Aug 19 15:18:39.140: INFO: Pod "pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05" satisfied condition "Succeeded or Failed" Aug 19 15:18:39.143: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05 container configmap-volume-test: STEP: delete the pod Aug 19 15:18:39.843: INFO: Waiting for pod pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05 to disappear Aug 19 15:18:39.929: INFO: Pod pod-configmaps-3432a6fc-0740-4e49-86fa-f416e028cf05 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:18:39.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3708" for this suite. 
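------------------------------
The pod-configmaps-* pod above mounts the ConfigMap as a volume, reads the projected file, and exits, which is why the test polls until the pod phase reaches "Succeeded or Failed". A minimal sketch of that pod shape; the image, key name, and mount path are illustrative:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // run once, end Succeeded
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "configmap-test-volume", // illustrative ConfigMap name
                            },
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("configmap-3708").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

The framework then fetches the container's logs (the "Trying to get logs from node ..." line) and compares them against the ConfigMap's value.
------------------------------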
• [SLOW TEST:5.604 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":2929,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:18:39.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-fdf9efe3-c684-4f80-85f6-8c99d76bd33b STEP: Creating a pod to test consume secrets Aug 19 15:18:40.224: INFO: Waiting up to 5m0s for pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042" in namespace "secrets-9305" to be "Succeeded or Failed" Aug 19 15:18:40.309: INFO: Pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042": Phase="Pending", Reason="", readiness=false. Elapsed: 84.686444ms Aug 19 15:18:42.997: INFO: Pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772492516s Aug 19 15:18:45.079: INFO: Pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042": Phase="Pending", Reason="", readiness=false. Elapsed: 4.855363176s Aug 19 15:18:47.087: INFO: Pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042": Phase="Pending", Reason="", readiness=false. Elapsed: 6.862693239s Aug 19 15:18:49.139: INFO: Pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.915302218s STEP: Saw pod success Aug 19 15:18:49.139: INFO: Pod "pod-secrets-6650db80-22a2-47a0-9055-4e5568907042" satisfied condition "Succeeded or Failed" Aug 19 15:18:49.143: INFO: Trying to get logs from node latest-worker pod pod-secrets-6650db80-22a2-47a0-9055-4e5568907042 container secret-volume-test: STEP: delete the pod Aug 19 15:18:49.308: INFO: Waiting for pod pod-secrets-6650db80-22a2-47a0-9055-4e5568907042 to disappear Aug 19 15:18:49.325: INFO: Pod pod-secrets-6650db80-22a2-47a0-9055-4e5568907042 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:18:49.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9305" for this suite. 
• [SLOW TEST:9.376 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":2937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:18:49.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Aug 19 15:18:49.776: INFO: Waiting up to 5m0s for pod "client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795" in namespace "containers-4032" to be "Succeeded or Failed" Aug 19 15:18:49.986: INFO: Pod "client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795": Phase="Pending", Reason="", readiness=false. Elapsed: 210.507891ms Aug 19 15:18:51.993: INFO: Pod "client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216976945s Aug 19 15:18:53.999: INFO: Pod "client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223501243s Aug 19 15:18:56.007: INFO: Pod "client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.231112654s STEP: Saw pod success Aug 19 15:18:56.007: INFO: Pod "client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795" satisfied condition "Succeeded or Failed" Aug 19 15:18:56.012: INFO: Trying to get logs from node latest-worker2 pod client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795 container test-container: STEP: delete the pod Aug 19 15:18:56.050: INFO: Waiting for pod client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795 to disappear Aug 19 15:18:56.054: INFO: Pod client-containers-25cd775d-4e39-4775-9be7-adf6afbb1795 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:18:56.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4032" for this suite. 
• [SLOW TEST:6.728 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:18:56.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:18:56.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 19 15:18:57.492: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-19T15:18:57Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-19T15:18:57Z]] name:name1 resourceVersion:1524524 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f6e105cb-1f09-41fd-b0f0-f863ae67ac4e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 19 15:19:07.543: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-19T15:19:07Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-19T15:19:07Z]] name:name2 resourceVersion:1524560 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:63233ca6-1509-4604-8f9f-5ecf5eab0312] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 19 15:19:17.612: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-19T15:18:57Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update 
time:2020-08-19T15:19:17Z]] name:name1 resourceVersion:1524590 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f6e105cb-1f09-41fd-b0f0-f863ae67ac4e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 19 15:19:27.622: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-19T15:19:07Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-19T15:19:27Z]] name:name2 resourceVersion:1524617 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:63233ca6-1509-4604-8f9f-5ecf5eab0312] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 19 15:19:37.909: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-19T15:18:57Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-19T15:19:17Z]] name:name1 resourceVersion:1524643 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f6e105cb-1f09-41fd-b0f0-f863ae67ac4e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 19 15:19:48.030: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-19T15:19:07Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-19T15:19:27Z]] name:name2 resourceVersion:1524671 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:63233ca6-1509-4604-8f9f-5ecf5eab0312] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:19:58.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8508" for this suite. 
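The ADDED/MODIFIED/DELETED events above come from a watch opened through the dynamic client against the cluster-scoped "noxus" resource. A minimal sketch of that pattern, assuming client-go v0.19.x and that the mygroup.example.com/v1beta1 CRD from the log already exists:

// Sketch: watch custom resource objects with the dynamic client.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}
	w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each create/update/delete of a CR arrives as an ADDED/MODIFIED/DELETED
	// event, matching the "Got : ..." lines in the log above.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type, ev.Object)
	}
}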
• [SLOW TEST:62.542 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":198,"skipped":2979,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:19:58.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-cdf7270d-9100-4d4a-a1f2-6723aa35fa30 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:19:58.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3629" for this suite. 
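The "empty secret key" test has no pod phase output because it never gets that far: API-server validation rejects the object at create time. A sketch of that failure mode, assuming client-go v0.19.x and an illustrative secret name:

// Sketch: a Secret whose data map uses "" as a key is rejected by validation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
		Data:       map[string][]byte{"": []byte("value")}, // empty key: invalid
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	// Expect a validation error rather than a created object.
	fmt.Println("create returned:", err)
}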
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":199,"skipped":2981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:19:58.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-75707e46-be15-4bf9-b5ac-54b3bfe928e6 STEP: Creating a pod to test consume configMaps Aug 19 15:19:58.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b" in namespace "configmap-3253" to be "Succeeded or Failed" Aug 19 15:19:59.010: INFO: Pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.776886ms Aug 19 15:20:01.027: INFO: Pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029378164s Aug 19 15:20:03.440: INFO: Pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441891605s Aug 19 15:20:06.065: INFO: Pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.066950485s Aug 19 15:20:08.082: INFO: Pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.084485206s STEP: Saw pod success Aug 19 15:20:08.083: INFO: Pod "pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b" satisfied condition "Succeeded or Failed" Aug 19 15:20:08.090: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b container configmap-volume-test: STEP: delete the pod Aug 19 15:20:08.346: INFO: Waiting for pod pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b to disappear Aug 19 15:20:08.350: INFO: Pod pod-configmaps-7c77f0f7-d75e-45cd-ab58-cd678917201b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:20:08.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3253" for this suite. 
• [SLOW TEST:9.448 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3006,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:20:08.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 19 15:20:22.502: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 19 15:20:22.511: INFO: Pod pod-with-prestop-http-hook still exists Aug 19 15:20:24.512: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 19 15:20:24.530: INFO: Pod pod-with-prestop-http-hook still exists Aug 19 15:20:26.512: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 19 15:20:26.541: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:20:26.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5909" for this suite. 
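The prestop-hook test above deletes a pod and then checks that its handler pod received an HTTP GET. A sketch of the hooked pod's shape, assuming client-go v0.19.x, where the hook type is still corev1.Handler (renamed LifecycleHandler in later releases); the path and port here are illustrative, not the test's exact values:

// Sketch: a pod with a PreStop HTTPGet lifecycle hook.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.2",
				Lifecycle: &corev1.Lifecycle{
					// On deletion, the kubelet issues this GET before the
					// container is terminated; the test's handler records it.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}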
• [SLOW TEST:18.204 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":201,"skipped":3007,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:20:26.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 15:20:32.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447231, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447231, loc:(*time.Location)(0x6e4f160)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-cbccbf6bb\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Aug 19 15:20:34.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447231, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:20:36.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447231, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:20:38.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447231, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:20:40.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447232, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447231, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 15:20:43.677: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should 
unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:20:44.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4973" for this suite. STEP: Destroying namespace "webhook-4973-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":202,"skipped":3007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:20:45.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a723fd60-7bd1-4217-a75c-c02e9dd82bd6 STEP: Creating a pod to test consume configMaps Aug 19 15:20:46.418: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6" in namespace "projected-3693" to be "Succeeded or Failed" Aug 19 15:20:46.638: INFO: Pod "pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6": Phase="Pending", Reason="", readiness=false. Elapsed: 219.758744ms Aug 19 15:20:48.818: INFO: Pod "pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.399869002s Aug 19 15:20:50.993: INFO: Pod "pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5752601s Aug 19 15:20:53.376: INFO: Pod "pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.95815791s STEP: Saw pod success Aug 19 15:20:53.376: INFO: Pod "pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6" satisfied condition "Succeeded or Failed" Aug 19 15:20:53.410: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6 container projected-configmap-volume-test: STEP: delete the pod Aug 19 15:20:53.722: INFO: Waiting for pod pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6 to disappear Aug 19 15:20:53.745: INFO: Pod pod-projected-configmaps-62aa331d-16be-486d-80d9-545fde412df6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:20:53.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3693" for this suite. • [SLOW TEST:8.795 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":203,"skipped":3034,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:20:54.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Aug 19 15:20:54.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config cluster-info' Aug 19 15:20:55.845: INFO: stderr: "" Aug 19 15:20:55.846: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:45453/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:20:55.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3859" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":204,"skipped":3038,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:20:55.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-883a65af-21d7-44e1-a7cd-cceab678bce3 STEP: Creating a pod to test consume secrets Aug 19 15:20:55.973: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685" in namespace "projected-3578" to be "Succeeded or Failed" Aug 19 15:20:55.986: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 12.699521ms Aug 19 15:20:58.294: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320402404s Aug 19 15:21:00.962: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 4.988899751s Aug 19 15:21:03.321: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 7.347959932s Aug 19 15:21:05.560: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 9.586806068s Aug 19 15:21:07.644: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 11.670624411s Aug 19 15:21:10.429: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Pending", Reason="", readiness=false. Elapsed: 14.455978056s Aug 19 15:21:13.612: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.638856409s STEP: Saw pod success Aug 19 15:21:13.613: INFO: Pod "pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685" satisfied condition "Succeeded or Failed" Aug 19 15:21:13.618: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685 container secret-volume-test: STEP: delete the pod Aug 19 15:21:16.158: INFO: Waiting for pod pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685 to disappear Aug 19 15:21:16.483: INFO: Pod pod-projected-secrets-25cd4525-181f-472a-9995-1c4ddaf4f685 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:21:16.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3578" for this suite. • [SLOW TEST:21.051 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3038,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:21:16.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7855 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7855 I0819 15:21:19.209368 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7855, replica count: 2 I0819 15:21:22.261023 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:21:25.261901 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:21:28.262723 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:21:31.263694 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 15:21:31.264: INFO: Creating new exec pod Aug 19 15:21:42.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7855 execpod9vj69 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 19 15:21:54.848: INFO: stderr: "I0819 15:21:54.708659 3214 log.go:181] (0x400073c000) (0x40007c4140) Create stream\nI0819 15:21:54.711248 3214 log.go:181] (0x400073c000) (0x40007c4140) Stream added, broadcasting: 1\nI0819 15:21:54.723146 3214 log.go:181] (0x400073c000) Reply frame received for 1\nI0819 15:21:54.723757 3214 log.go:181] (0x400073c000) (0x400067a000) Create stream\nI0819 15:21:54.723822 3214 log.go:181] (0x400073c000) (0x400067a000) Stream added, broadcasting: 3\nI0819 15:21:54.725751 3214 log.go:181] (0x400073c000) Reply frame received for 3\nI0819 15:21:54.726245 3214 log.go:181] (0x400073c000) (0x4000d54000) Create stream\nI0819 15:21:54.726360 3214 log.go:181] (0x400073c000) (0x4000d54000) Stream added, broadcasting: 5\nI0819 15:21:54.728098 3214 log.go:181] (0x400073c000) Reply frame received for 5\nI0819 15:21:54.814291 3214 log.go:181] (0x400073c000) Data frame received for 5\nI0819 15:21:54.814478 3214 log.go:181] (0x4000d54000) (5) Data frame handling\nI0819 15:21:54.814895 3214 log.go:181] (0x4000d54000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0819 15:21:54.829889 3214 log.go:181] (0x400073c000) Data frame received for 5\nI0819 15:21:54.830066 3214 log.go:181] (0x4000d54000) (5) Data frame handling\nI0819 15:21:54.830167 3214 log.go:181] (0x4000d54000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0819 15:21:54.830352 3214 log.go:181] (0x400073c000) Data frame received for 3\nI0819 15:21:54.830574 3214 log.go:181] (0x400067a000) (3) Data frame handling\nI0819 15:21:54.830794 3214 log.go:181] (0x400073c000) Data frame received for 5\nI0819 15:21:54.830887 3214 log.go:181] (0x4000d54000) (5) Data frame handling\nI0819 15:21:54.831530 3214 log.go:181] (0x400073c000) Data frame received for 1\nI0819 15:21:54.831632 3214 log.go:181] (0x40007c4140) (1) Data frame handling\nI0819 15:21:54.831779 3214 log.go:181] (0x40007c4140) (1) Data frame sent\nI0819 15:21:54.833010 3214 log.go:181] (0x400073c000) (0x40007c4140) Stream removed, broadcasting: 1\nI0819 15:21:54.835575 3214 log.go:181] (0x400073c000) Go away received\nI0819 15:21:54.837917 3214 log.go:181] (0x400073c000) (0x40007c4140) Stream removed, broadcasting: 1\nI0819 15:21:54.838234 3214 log.go:181] (0x400073c000) (0x400067a000) Stream removed, broadcasting: 3\nI0819 15:21:54.838466 3214 log.go:181] (0x400073c000) (0x4000d54000) Stream removed, broadcasting: 5\n" Aug 19 15:21:54.849: INFO: stdout: "" Aug 19 15:21:54.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7855 execpod9vj69 -- /bin/sh -x -c nc -zv -t -w 2 10.111.11.218 80' Aug 19 15:21:56.608: INFO: stderr: "I0819 15:21:56.512564 3235 log.go:181] (0x400030c160) (0x4000392140) Create stream\nI0819 15:21:56.514878 3235 log.go:181] (0x400030c160) (0x4000392140) Stream added, broadcasting: 1\nI0819 15:21:56.526502 3235 log.go:181] (0x400030c160) Reply frame received for 1\nI0819 
15:21:56.527630 3235 log.go:181] (0x400030c160) (0x4000449040) Create stream\nI0819 15:21:56.527751 3235 log.go:181] (0x400030c160) (0x4000449040) Stream added, broadcasting: 3\nI0819 15:21:56.532180 3235 log.go:181] (0x400030c160) Reply frame received for 3\nI0819 15:21:56.532427 3235 log.go:181] (0x400030c160) (0x40001bd040) Create stream\nI0819 15:21:56.532497 3235 log.go:181] (0x400030c160) (0x40001bd040) Stream added, broadcasting: 5\nI0819 15:21:56.533545 3235 log.go:181] (0x400030c160) Reply frame received for 5\nI0819 15:21:56.587057 3235 log.go:181] (0x400030c160) Data frame received for 5\nI0819 15:21:56.587451 3235 log.go:181] (0x40001bd040) (5) Data frame handling\nI0819 15:21:56.587710 3235 log.go:181] (0x400030c160) Data frame received for 1\nI0819 15:21:56.587811 3235 log.go:181] (0x4000392140) (1) Data frame handling\nI0819 15:21:56.587973 3235 log.go:181] (0x400030c160) Data frame received for 3\nI0819 15:21:56.588101 3235 log.go:181] (0x4000449040) (3) Data frame handling\n+ nc -zv -t -w 2 10.111.11.218 80\nConnection to 10.111.11.218 80 port [tcp/http] succeeded!\nI0819 15:21:56.590354 3235 log.go:181] (0x40001bd040) (5) Data frame sent\nI0819 15:21:56.590850 3235 log.go:181] (0x4000392140) (1) Data frame sent\nI0819 15:21:56.591390 3235 log.go:181] (0x400030c160) Data frame received for 5\nI0819 15:21:56.591498 3235 log.go:181] (0x40001bd040) (5) Data frame handling\nI0819 15:21:56.594025 3235 log.go:181] (0x400030c160) (0x4000392140) Stream removed, broadcasting: 1\nI0819 15:21:56.594674 3235 log.go:181] (0x400030c160) Go away received\nI0819 15:21:56.598827 3235 log.go:181] (0x400030c160) (0x4000392140) Stream removed, broadcasting: 1\nI0819 15:21:56.599120 3235 log.go:181] (0x400030c160) (0x4000449040) Stream removed, broadcasting: 3\nI0819 15:21:56.599316 3235 log.go:181] (0x400030c160) (0x40001bd040) Stream removed, broadcasting: 5\n" Aug 19 15:21:56.609: INFO: stdout: "" Aug 19 15:21:56.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7855 execpod9vj69 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31078' Aug 19 15:21:58.433: INFO: stderr: "I0819 15:21:58.338481 3255 log.go:181] (0x400003b290) (0x40000d4fa0) Create stream\nI0819 15:21:58.341445 3255 log.go:181] (0x400003b290) (0x40000d4fa0) Stream added, broadcasting: 1\nI0819 15:21:58.350527 3255 log.go:181] (0x400003b290) Reply frame received for 1\nI0819 15:21:58.351181 3255 log.go:181] (0x400003b290) (0x40000d50e0) Create stream\nI0819 15:21:58.351248 3255 log.go:181] (0x400003b290) (0x40000d50e0) Stream added, broadcasting: 3\nI0819 15:21:58.352694 3255 log.go:181] (0x400003b290) Reply frame received for 3\nI0819 15:21:58.353081 3255 log.go:181] (0x400003b290) (0x4000930000) Create stream\nI0819 15:21:58.353152 3255 log.go:181] (0x400003b290) (0x4000930000) Stream added, broadcasting: 5\nI0819 15:21:58.354177 3255 log.go:181] (0x400003b290) Reply frame received for 5\nI0819 15:21:58.414274 3255 log.go:181] (0x400003b290) Data frame received for 3\nI0819 15:21:58.414559 3255 log.go:181] (0x40000d50e0) (3) Data frame handling\nI0819 15:21:58.415661 3255 log.go:181] (0x400003b290) Data frame received for 5\nI0819 15:21:58.415749 3255 log.go:181] (0x4000930000) (5) Data frame handling\nI0819 15:21:58.416636 3255 log.go:181] (0x400003b290) Data frame received for 1\nI0819 15:21:58.416834 3255 log.go:181] (0x40000d4fa0) (1) Data frame handling\nI0819 15:21:58.417346 3255 log.go:181] (0x4000930000) (5) Data frame sent\nI0819 
15:21:58.417562 3255 log.go:181] (0x40000d4fa0) (1) Data frame sent\nI0819 15:21:58.418728 3255 log.go:181] (0x400003b290) Data frame received for 5\nI0819 15:21:58.418791 3255 log.go:181] (0x4000930000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31078\nConnection to 172.18.0.11 31078 port [tcp/31078] succeeded!\nI0819 15:21:58.419921 3255 log.go:181] (0x400003b290) (0x40000d4fa0) Stream removed, broadcasting: 1\nI0819 15:21:58.421046 3255 log.go:181] (0x400003b290) Go away received\nI0819 15:21:58.423342 3255 log.go:181] (0x400003b290) (0x40000d4fa0) Stream removed, broadcasting: 1\nI0819 15:21:58.423610 3255 log.go:181] (0x400003b290) (0x40000d50e0) Stream removed, broadcasting: 3\nI0819 15:21:58.423806 3255 log.go:181] (0x400003b290) (0x4000930000) Stream removed, broadcasting: 5\n" Aug 19 15:21:58.434: INFO: stdout: "" Aug 19 15:21:58.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-7855 execpod9vj69 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31078' Aug 19 15:22:00.101: INFO: stderr: "I0819 15:21:59.996023 3275 log.go:181] (0x4000d10000) (0x4000aec000) Create stream\nI0819 15:22:00.000713 3275 log.go:181] (0x4000d10000) (0x4000aec000) Stream added, broadcasting: 1\nI0819 15:22:00.012618 3275 log.go:181] (0x4000d10000) Reply frame received for 1\nI0819 15:22:00.014156 3275 log.go:181] (0x4000d10000) (0x4000f10000) Create stream\nI0819 15:22:00.014283 3275 log.go:181] (0x4000d10000) (0x4000f10000) Stream added, broadcasting: 3\nI0819 15:22:00.016256 3275 log.go:181] (0x4000d10000) Reply frame received for 3\nI0819 15:22:00.016950 3275 log.go:181] (0x4000d10000) (0x40001b2140) Create stream\nI0819 15:22:00.017089 3275 log.go:181] (0x4000d10000) (0x40001b2140) Stream added, broadcasting: 5\nI0819 15:22:00.018864 3275 log.go:181] (0x4000d10000) Reply frame received for 5\nI0819 15:22:00.083590 3275 log.go:181] (0x4000d10000) Data frame received for 5\nI0819 15:22:00.083968 3275 log.go:181] (0x4000d10000) Data frame received for 3\nI0819 15:22:00.084063 3275 log.go:181] (0x4000f10000) (3) Data frame handling\nI0819 15:22:00.084124 3275 log.go:181] (0x4000d10000) Data frame received for 1\nI0819 15:22:00.084195 3275 log.go:181] (0x4000aec000) (1) Data frame handling\nI0819 15:22:00.084291 3275 log.go:181] (0x40001b2140) (5) Data frame handling\nI0819 15:22:00.084858 3275 log.go:181] (0x40001b2140) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31078\nConnection to 172.18.0.14 31078 port [tcp/31078] succeeded!\nI0819 15:22:00.085957 3275 log.go:181] (0x4000aec000) (1) Data frame sent\nI0819 15:22:00.086281 3275 log.go:181] (0x4000d10000) Data frame received for 5\nI0819 15:22:00.086358 3275 log.go:181] (0x40001b2140) (5) Data frame handling\nI0819 15:22:00.087421 3275 log.go:181] (0x4000d10000) (0x4000aec000) Stream removed, broadcasting: 1\nI0819 15:22:00.089958 3275 log.go:181] (0x4000d10000) Go away received\nI0819 15:22:00.090462 3275 log.go:181] (0x4000d10000) (0x4000aec000) Stream removed, broadcasting: 1\nI0819 15:22:00.091569 3275 log.go:181] (0x4000d10000) (0x4000f10000) Stream removed, broadcasting: 3\nI0819 15:22:00.092002 3275 log.go:181] (0x4000d10000) (0x40001b2140) Stream removed, broadcasting: 5\n" Aug 19 15:22:00.102: INFO: stdout: "" Aug 19 15:22:00.102: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:00.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7855" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:43.463 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":206,"skipped":3043,"failed":0} [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:00.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:22:00.666: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:07.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8192" for this suite. 
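The log-retrieval operation this pods test verifies can be driven from client-go as below. Note the hedge: client-go's GetLogs request uses HTTP streaming by default, while the conformance test exercises the same endpoint over the websocket transport; the pod name here is illustrative. Assumes client-go v0.19.x.

// Sketch: stream a container's logs to stdout.
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-demo",
		&corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	// Copy the log stream to stdout until the container exits.
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}
}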
• [SLOW TEST:7.570 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":207,"skipped":3043,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:07.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a0568d80-b903-40d4-8ec8-f5432ffbaf5a STEP: Creating a pod to test consume secrets Aug 19 15:22:10.409: INFO: Waiting up to 5m0s for pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f" in namespace "secrets-7978" to be "Succeeded or Failed" Aug 19 15:22:10.622: INFO: Pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f": Phase="Pending", Reason="", readiness=false. Elapsed: 212.273791ms Aug 19 15:22:12.897: INFO: Pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.488216443s Aug 19 15:22:15.029: INFO: Pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.619828493s Aug 19 15:22:17.208: INFO: Pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.799066154s Aug 19 15:22:19.238: INFO: Pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.829101812s STEP: Saw pod success Aug 19 15:22:19.239: INFO: Pod "pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f" satisfied condition "Succeeded or Failed" Aug 19 15:22:19.321: INFO: Trying to get logs from node latest-worker pod pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f container secret-volume-test: STEP: delete the pod Aug 19 15:22:19.565: INFO: Waiting for pod pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f to disappear Aug 19 15:22:19.674: INFO: Pod pod-secrets-a5ffff48-423e-461e-8cbd-1c6acc24783f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:19.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7978" for this suite. STEP: Destroying namespace "secret-namespace-975" for this suite. • [SLOW TEST:11.777 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3054,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:19.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9cdc6d72-2a8b-46e8-aa06-bdfb10cdc39f STEP: Creating a pod to test consume secrets Aug 19 15:22:19.990: INFO: Waiting up to 5m0s for pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4" in namespace "secrets-201" to be "Succeeded or Failed" Aug 19 15:22:20.106: INFO: Pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 115.73301ms Aug 19 15:22:22.114: INFO: Pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123689048s Aug 19 15:22:24.122: INFO: Pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.131067046s Aug 19 15:22:26.640: INFO: Pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649607352s Aug 19 15:22:28.646: INFO: Pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.655805624s STEP: Saw pod success Aug 19 15:22:28.647: INFO: Pod "pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4" satisfied condition "Succeeded or Failed" Aug 19 15:22:28.658: INFO: Trying to get logs from node latest-worker pod pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4 container secret-volume-test: STEP: delete the pod Aug 19 15:22:28.808: INFO: Waiting for pod pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4 to disappear Aug 19 15:22:28.816: INFO: Pod pod-secrets-ce5e7d11-9ce9-42c5-b09a-6915054c1cd4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:28.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-201" for this suite. • [SLOW TEST:9.106 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3071,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:28.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7ad44822-8779-4fc8-9e9d-4176edf46bef STEP: Creating a pod to test consume secrets Aug 19 15:22:28.901: INFO: Waiting up to 5m0s for pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb" in namespace "secrets-8572" to be "Succeeded or Failed" Aug 19 15:22:28.962: INFO: Pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 60.463068ms Aug 19 15:22:31.442: INFO: Pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.540983232s Aug 19 15:22:33.449: INFO: Pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.54796558s Aug 19 15:22:36.011: INFO: Pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb": Phase="Running", Reason="", readiness=true. Elapsed: 7.109738686s Aug 19 15:22:38.242: INFO: Pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.341079773s STEP: Saw pod success Aug 19 15:22:38.242: INFO: Pod "pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb" satisfied condition "Succeeded or Failed" Aug 19 15:22:38.816: INFO: Trying to get logs from node latest-worker pod pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb container secret-volume-test: STEP: delete the pod Aug 19 15:22:39.292: INFO: Waiting for pod pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb to disappear Aug 19 15:22:39.489: INFO: Pod pod-secrets-4dce3946-1ac1-4fba-8f38-c49b9abec8bb no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:39.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8572" for this suite. • [SLOW TEST:10.672 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:39.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:22:39.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b" in namespace "downward-api-4871" to be "Succeeded or Failed" Aug 19 15:22:40.484: INFO: Pod 
"downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b": Phase="Pending", Reason="", readiness=false. Elapsed: 574.022074ms Aug 19 15:22:42.821: INFO: Pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.910970954s Aug 19 15:22:45.029: INFO: Pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.118705789s Aug 19 15:22:47.391: INFO: Pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.481111916s Aug 19 15:22:49.416: INFO: Pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b": Phase="Running", Reason="", readiness=true. Elapsed: 9.505484666s Aug 19 15:22:51.718: INFO: Pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.808093895s STEP: Saw pod success Aug 19 15:22:51.718: INFO: Pod "downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b" satisfied condition "Succeeded or Failed" Aug 19 15:22:51.723: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b container client-container: STEP: delete the pod Aug 19 15:22:51.917: INFO: Waiting for pod downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b to disappear Aug 19 15:22:51.940: INFO: Pod downwardapi-volume-ec3ac0bc-8046-4c0e-b870-16ae7fa7e12b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:51.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4871" for this suite. 
• [SLOW TEST:12.442 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3105,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:51.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 19 15:22:52.218: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:22:59.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8740" for this suite. 
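------------------------------
The InitContainer spec above creates a pod whose init container exits non-zero; with RestartPolicy Never the kubelet never retries it, the pod ends up Phase=Failed, and the app container is never started. A minimal sketch of such a pod, assuming the standard k8s.io/api types (names and images illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod never reaches its app container: init1 exits non-zero
// and RestartPolicyNever forbids retries, so run1 stays unstarted.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"}, // deliberately fails
			}},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
}
------------------------------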
• [SLOW TEST:7.344 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":212,"skipped":3106,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:22:59.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-eaac1d5d-e553-452d-bf02-7748f980c8b8 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:23:10.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2671" for this suite. • [SLOW TEST:10.831 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":213,"skipped":3119,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:23:10.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:23:28.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8338" for this suite. • [SLOW TEST:18.265 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":214,"skipped":3134,"failed":0} S ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:23:28.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:23:28.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4748" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":215,"skipped":3135,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:23:28.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:23:28.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11" in namespace "projected-133" to be "Succeeded or Failed" Aug 19 15:23:28.994: INFO: Pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11": Phase="Pending", Reason="", readiness=false. Elapsed: 34.930042ms Aug 19 15:23:31.269: INFO: Pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31007682s Aug 19 15:23:33.276: INFO: Pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317232177s Aug 19 15:23:35.467: INFO: Pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.508181041s Aug 19 15:23:37.475: INFO: Pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.51608657s STEP: Saw pod success Aug 19 15:23:37.475: INFO: Pod "downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11" satisfied condition "Succeeded or Failed" Aug 19 15:23:37.481: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11 container client-container: STEP: delete the pod Aug 19 15:23:37.544: INFO: Waiting for pod downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11 to disappear Aug 19 15:23:37.551: INFO: Pod downwardapi-volume-36a7bfa1-fa90-4be6-8591-05f94e8f7b11 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:23:37.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-133" for this suite. 
• [SLOW TEST:8.698 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:23:37.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 19 15:23:37.826: INFO: Waiting up to 5m0s for pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154" in namespace "emptydir-8173" to be "Succeeded or Failed" Aug 19 15:23:37.876: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154": Phase="Pending", Reason="", readiness=false. Elapsed: 49.65184ms Aug 19 15:23:40.929: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154": Phase="Pending", Reason="", readiness=false. Elapsed: 3.102421725s Aug 19 15:23:42.965: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154": Phase="Pending", Reason="", readiness=false. Elapsed: 5.138367248s Aug 19 15:23:45.287: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154": Phase="Pending", Reason="", readiness=false. Elapsed: 7.460489994s Aug 19 15:23:47.526: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154": Phase="Running", Reason="", readiness=true. Elapsed: 9.699235489s Aug 19 15:23:49.540: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.713135139s STEP: Saw pod success Aug 19 15:23:49.540: INFO: Pod "pod-7cad63f9-3b8e-49fb-ba67-2b908378a154" satisfied condition "Succeeded or Failed" Aug 19 15:23:49.543: INFO: Trying to get logs from node latest-worker2 pod pod-7cad63f9-3b8e-49fb-ba67-2b908378a154 container test-container: STEP: delete the pod Aug 19 15:23:49.660: INFO: Waiting for pod pod-7cad63f9-3b8e-49fb-ba67-2b908378a154 to disappear Aug 19 15:23:49.667: INFO: Pod pod-7cad63f9-3b8e-49fb-ba67-2b908378a154 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:23:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8173" for this suite. 
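------------------------------
The EmptyDir spec above requests a memory-backed (tmpfs) emptyDir and asserts on the mount's filesystem type and mode bits. The volume half, assuming the standard k8s.io/api types:

package main

import corev1 "k8s.io/api/core/v1"

// tmpfsVolume is an emptyDir backed by RAM; the kubelet mounts it as
// tmpfs, which is what the spec's mode and fs-type checks look for.
func tmpfsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
}
------------------------------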
• [SLOW TEST:12.109 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:23:49.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:23:50.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade" in namespace "downward-api-3650" to be "Succeeded or Failed" Aug 19 15:23:50.028: INFO: Pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade": Phase="Pending", Reason="", readiness=false. Elapsed: 22.192817ms Aug 19 15:23:52.047: INFO: Pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041437522s Aug 19 15:23:54.132: INFO: Pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126325232s Aug 19 15:23:56.396: INFO: Pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade": Phase="Running", Reason="", readiness=true. Elapsed: 6.390606469s Aug 19 15:23:58.405: INFO: Pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.399107147s STEP: Saw pod success Aug 19 15:23:58.405: INFO: Pod "downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade" satisfied condition "Succeeded or Failed" Aug 19 15:23:58.411: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade container client-container: STEP: delete the pod Aug 19 15:23:58.446: INFO: Waiting for pod downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade to disappear Aug 19 15:23:58.462: INFO: Pod downwardapi-volume-f4bf1736-e83a-480c-8d46-fb17e6412ade no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:23:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3650" for this suite. • [SLOW TEST:8.799 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":218,"skipped":3190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:23:58.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 15:24:00.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 15:24:02.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:24:04.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447440, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 15:24:08.000: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:24:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6511" for this suite. STEP: Destroying namespace "webhook-6511-markers" for this suite. 
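------------------------------
The steps above first update the mutating webhook's rules to exclude the CREATE operation (so the first configMap passes through unmodified), then JSON-patch CREATE back in (so the second configMap is mutated). A sketch of the patch step with client-go, assuming a pre-built clientset; the configuration name and rule index are illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// reenableCreate JSON-patches the CREATE operation back into the first
// rule of a mutating webhook configuration, mirroring the spec's
// "Patching ... to include the create operation" step.
func reenableCreate(ctx context.Context, cs kubernetes.Interface, name string) error {
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Patch(ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{})
	return err
}
------------------------------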
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.109 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":219,"skipped":3244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:24:08.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-a6be82ad-c5b2-4cae-9ca3-f5c9de3da11b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-a6be82ad-c5b2-4cae-9ca3-f5c9de3da11b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:24:18.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9212" for this suite. 
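------------------------------
The projected configMap spec above relies on the kubelet refreshing a projected volume in place after the backing ConfigMap changes, with no pod restart; "waiting to observe update in volume" polls for the new file content. A sketch of the update step with client-go (names and keys illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpConfigMap changes one key; the kubelet then rewrites the
// projected volume contents in the running pod.
func bumpConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
------------------------------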
• [SLOW TEST:10.336 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:24:18.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Aug 19 15:24:19.074: INFO: Waiting up to 5m0s for pod "var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0" in namespace "var-expansion-459" to be "Succeeded or Failed" Aug 19 15:24:19.150: INFO: Pod "var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0": Phase="Pending", Reason="", readiness=false. Elapsed: 75.995269ms Aug 19 15:24:21.502: INFO: Pod "var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427764162s Aug 19 15:24:23.612: INFO: Pod "var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538170451s Aug 19 15:24:25.620: INFO: Pod "var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.546424496s STEP: Saw pod success Aug 19 15:24:25.621: INFO: Pod "var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0" satisfied condition "Succeeded or Failed" Aug 19 15:24:25.625: INFO: Trying to get logs from node latest-worker2 pod var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0 container dapi-container: STEP: delete the pod Aug 19 15:24:25.665: INFO: Waiting for pod var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0 to disappear Aug 19 15:24:25.679: INFO: Pod var-expansion-922ab767-fa31-4ec9-8760-d9376dad43c0 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:24:25.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-459" for this suite. 
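------------------------------
The Variable Expansion spec above verifies that $(VAR) references in env values are resolved from earlier entries by the kubelet before the container starts. A sketch of such an env block, assuming the standard k8s.io/api types (values illustrative):

package main

import corev1 "k8s.io/api/core/v1"

// composedEnv: COMPOSED is expanded from FOO and BAR at container
// start, so the process sees "foo-value;;bar-value".
func composedEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "bar-value"},
		{Name: "COMPOSED", Value: "$(FOO);;$(BAR)"},
	}
}
------------------------------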
• [SLOW TEST:6.858 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3315,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:24:25.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 19 15:24:26.008: INFO: Waiting up to 5m0s for pod "pod-30eae9a6-0021-43de-b876-c8bea53ffbee" in namespace "emptydir-289" to be "Succeeded or Failed" Aug 19 15:24:26.012: INFO: Pod "pod-30eae9a6-0021-43de-b876-c8bea53ffbee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063752ms Aug 19 15:24:28.042: INFO: Pod "pod-30eae9a6-0021-43de-b876-c8bea53ffbee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034159011s Aug 19 15:24:30.048: INFO: Pod "pod-30eae9a6-0021-43de-b876-c8bea53ffbee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03969397s Aug 19 15:24:32.066: INFO: Pod "pod-30eae9a6-0021-43de-b876-c8bea53ffbee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05764179s STEP: Saw pod success Aug 19 15:24:32.066: INFO: Pod "pod-30eae9a6-0021-43de-b876-c8bea53ffbee" satisfied condition "Succeeded or Failed" Aug 19 15:24:32.076: INFO: Trying to get logs from node latest-worker2 pod pod-30eae9a6-0021-43de-b876-c8bea53ffbee container test-container: STEP: delete the pod Aug 19 15:24:32.105: INFO: Waiting for pod pod-30eae9a6-0021-43de-b876-c8bea53ffbee to disappear Aug 19 15:24:32.117: INFO: Pod pod-30eae9a6-0021-43de-b876-c8bea53ffbee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:24:32.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-289" for this suite. 
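------------------------------
The (root,0666,tmpfs) variant above writes a file as root with mode 0666 onto a memory-backed emptyDir and asserts on the reported permissions. A sketch of the container half, assuming the standard k8s.io/api types; the image and command are illustrative stand-ins for the suite's own mount-test helper:

package main

import corev1 "k8s.io/api/core/v1"

// modeCheckContainer creates a 0666 file on the tmpfs mount and prints
// its permissions so a caller can assert on the output.
func modeCheckContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "test-volume",
			MountPath: "/test-volume",
		}},
	}
}
------------------------------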
• [SLOW TEST:6.339 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3337,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:24:32.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-fae1a612-e2e4-4908-9b35-b4ad2465577d in namespace container-probe-821 Aug 19 15:24:38.244: INFO: Started pod busybox-fae1a612-e2e4-4908-9b35-b4ad2465577d in namespace container-probe-821 STEP: checking the pod's current state and verifying that restartCount is present Aug 19 15:24:38.248: INFO: Initial restart count of pod busybox-fae1a612-e2e4-4908-9b35-b4ad2465577d is 0 Aug 19 15:25:31.129: INFO: Restart count of pod container-probe-821/busybox-fae1a612-e2e4-4908-9b35-b4ad2465577d is now 1 (52.881205856s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:25:31.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-821" for this suite. 
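------------------------------
The probe spec above gives busybox an exec liveness probe of "cat /tmp/health": the container creates the file, sleeps, then removes it, so the probe starts failing and the kubelet restarts the container, matching the restartCount 0 -> 1 transition in the log. A sketch with v1.19-era field names (Probe still embeds Handler in that release); timings are illustrative:

package main

import corev1 "k8s.io/api/core/v1"

// livenessContainer passes its exec probe while /tmp/health exists,
// then fails once the shell removes it, triggering a restart.
func livenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1,
		},
	}
}
------------------------------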
• [SLOW TEST:59.052 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:25:31.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:25:31.700: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 19 15:25:31.825: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 19 15:25:36.832: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 19 15:25:38.855: INFO: Creating deployment "test-rolling-update-deployment" Aug 19 15:25:38.861: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 19 15:25:38.927: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 19 15:25:40.939: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 19 15:25:40.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447538, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 
15:25:43.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447538, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:25:44.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447539, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447544, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447538, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:25:46.948: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 19 15:25:46.960: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8460 /apis/apps/v1/namespaces/deployment-8460/deployments/test-rolling-update-deployment baa3a9ed-3f0c-42b7-a10e-ccfd94ee44d8 1526459 1 2020-08-19 15:25:38 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-19 15:25:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-19 15:25:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4001ebc3c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-19 15:25:39 +0000 UTC,LastTransitionTime:2020-08-19 15:25:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-08-19 15:25:45 +0000 UTC,LastTransitionTime:2020-08-19 15:25:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 19 15:25:46.966: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-8460 /apis/apps/v1/namespaces/deployment-8460/replicasets/test-rolling-update-deployment-c4cb8d6d9 3bf39af5-9d49-4179-bb08-6f10fa45aea6 1526447 1 2020-08-19 15:25:38 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment baa3a9ed-3f0c-42b7-a10e-ccfd94ee44d8 0x4001ebc910 0x4001ebc911}] [] [{kube-controller-manager Update apps/v1 2020-08-19 15:25:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"baa3a9ed-3f0c-42b7-a10e-ccfd94ee44d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4001ebcdc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 19 15:25:46.966: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 19 15:25:46.967: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8460 /apis/apps/v1/namespaces/deployment-8460/replicasets/test-rolling-update-controller d8a762b3-dace-4f66-bef1-d9b489ec38eb 1526457 2 2020-08-19 15:25:31 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment baa3a9ed-3f0c-42b7-a10e-ccfd94ee44d8 0x4001ebc7ff 0x4001ebc810}] [] [{e2e.test Update apps/v1 2020-08-19 15:25:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} 
{kube-controller-manager Update apps/v1 2020-08-19 15:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"baa3a9ed-3f0c-42b7-a10e-ccfd94ee44d8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4001ebc8a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 19 15:25:46.972: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-srlbj" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-srlbj test-rolling-update-deployment-c4cb8d6d9- deployment-8460 /api/v1/namespaces/deployment-8460/pods/test-rolling-update-deployment-c4cb8d6d9-srlbj 1b40db1f-21ad-4fa0-b9dd-6529e19c9e5c 1526446 0 2020-08-19 15:25:38 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 3bf39af5-9d49-4179-bb08-6f10fa45aea6 0x40057dca00 0x40057dca01}] [] [{kube-controller-manager Update v1 2020-08-19 15:25:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bf39af5-9d49-4179-bb08-6f10fa45aea6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 15:25:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hmhwf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hmhwf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hmhwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:25:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:25:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:25:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.103,StartTime:2020-08-19 15:25:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 15:25:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://2046b5fed973fa73567dbfdc15f6a0f9b7bb311be9845d5d0bae2ce71e2c4403,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:25:46.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8460" for this suite. • [SLOW TEST:15.800 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":224,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:25:46.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in 
namespace statefulset-8760 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 19 15:25:47.143: INFO: Found 0 stateful pods, waiting for 3 Aug 19 15:25:57.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:25:57.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:25:57.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 19 15:26:07.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:26:07.153: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:26:07.153: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 19 15:26:07.197: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 19 15:26:17.291: INFO: Updating stateful set ss2 Aug 19 15:26:17.341: INFO: Waiting for Pod statefulset-8760/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 19 15:26:28.073: INFO: Found 2 stateful pods, waiting for 3 Aug 19 15:26:38.154: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:26:38.155: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:26:38.155: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Aug 19 15:26:48.081: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:26:48.081: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:26:48.081: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 19 15:26:48.115: INFO: Updating stateful set ss2 Aug 19 15:26:48.750: INFO: Waiting for Pod statefulset-8760/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 19 15:26:58.818: INFO: Updating stateful set ss2 Aug 19 15:26:59.363: INFO: Waiting for StatefulSet statefulset-8760/ss2 to complete update Aug 19 15:26:59.364: INFO: Waiting for Pod statefulset-8760/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 19 15:27:09.375: INFO: Waiting for StatefulSet statefulset-8760/ss2 to complete update Aug 19 15:27:09.375: INFO: Waiting for Pod statefulset-8760/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 19 15:27:19.977: INFO: Waiting for StatefulSet statefulset-8760/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 19 15:27:29.382: INFO: Deleting all statefulset in ns statefulset-8760 Aug 19 15:27:29.386: INFO: Scaling statefulset ss2 to 0 Aug 19 15:28:09.421: INFO: Waiting 
for statefulset status.replicas updated to 0 Aug 19 15:28:09.426: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:28:09.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8760" for this suite. • [SLOW TEST:142.602 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":225,"skipped":3444,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:28:09.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 19 15:28:10.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1769' Aug 19 15:28:13.992: INFO: stderr: "" Aug 19 15:28:13.992: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
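The readiness checks that follow poll each pod's containerStatuses through a go-template until the update-demo container reports a running state. Outside the harness the same wait can be written declaratively; a minimal sketch, not the framework's own method (the timeout value here is an arbitrary choice):

  kubectl wait pod -l name=update-demo --for=condition=Ready --namespace=kubectl-1769 --timeout=5m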
Aug 19 15:28:13.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1769' Aug 19 15:28:15.480: INFO: stderr: "" Aug 19 15:28:15.480: INFO: stdout: "update-demo-nautilus-hxn79 update-demo-nautilus-ktlsv " Aug 19 15:28:15.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxn79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:16.877: INFO: stderr: "" Aug 19 15:28:16.878: INFO: stdout: "" Aug 19 15:28:16.878: INFO: update-demo-nautilus-hxn79 is created but not running Aug 19 15:28:21.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1769' Aug 19 15:28:23.252: INFO: stderr: "" Aug 19 15:28:23.253: INFO: stdout: "update-demo-nautilus-hxn79 update-demo-nautilus-ktlsv " Aug 19 15:28:23.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxn79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:24.589: INFO: stderr: "" Aug 19 15:28:24.589: INFO: stdout: "true" Aug 19 15:28:24.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hxn79 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:25.969: INFO: stderr: "" Aug 19 15:28:25.969: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 15:28:25.969: INFO: validating pod update-demo-nautilus-hxn79 Aug 19 15:28:25.976: INFO: got data: { "image": "nautilus.jpg" } Aug 19 15:28:25.976: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 19 15:28:25.976: INFO: update-demo-nautilus-hxn79 is verified up and running Aug 19 15:28:25.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktlsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:27.535: INFO: stderr: "" Aug 19 15:28:27.535: INFO: stdout: "true" Aug 19 15:28:27.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktlsv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:28.933: INFO: stderr: "" Aug 19 15:28:28.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 15:28:28.934: INFO: validating pod update-demo-nautilus-ktlsv Aug 19 15:28:28.972: INFO: got data: { "image": "nautilus.jpg" } Aug 19 15:28:28.973: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 19 15:28:28.973: INFO: update-demo-nautilus-ktlsv is verified up and running STEP: scaling down the replication controller Aug 19 15:28:28.986: INFO: scanned /root for discovery docs: Aug 19 15:28:28.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1769' Aug 19 15:28:31.687: INFO: stderr: "" Aug 19 15:28:31.687: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 19 15:28:31.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1769' Aug 19 15:28:33.064: INFO: stderr: "" Aug 19 15:28:33.064: INFO: stdout: "update-demo-nautilus-hxn79 update-demo-nautilus-ktlsv " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 19 15:28:38.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1769' Aug 19 15:28:39.800: INFO: stderr: "" Aug 19 15:28:39.800: INFO: stdout: "update-demo-nautilus-ktlsv " Aug 19 15:28:39.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktlsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:41.339: INFO: stderr: "" Aug 19 15:28:41.339: INFO: stdout: "true" Aug 19 15:28:41.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktlsv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:42.802: INFO: stderr: "" Aug 19 15:28:42.802: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 15:28:42.802: INFO: validating pod update-demo-nautilus-ktlsv Aug 19 15:28:42.870: INFO: got data: { "image": "nautilus.jpg" } Aug 19 15:28:42.871: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 19 15:28:42.871: INFO: update-demo-nautilus-ktlsv is verified up and running STEP: scaling up the replication controller Aug 19 15:28:42.878: INFO: scanned /root for discovery docs: Aug 19 15:28:42.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1769' Aug 19 15:28:45.660: INFO: stderr: "" Aug 19 15:28:45.660: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 19 15:28:45.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1769' Aug 19 15:28:47.094: INFO: stderr: "" Aug 19 15:28:47.094: INFO: stdout: "update-demo-nautilus-f7rqd update-demo-nautilus-ktlsv " Aug 19 15:28:47.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7rqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:49.118: INFO: stderr: "" Aug 19 15:28:49.118: INFO: stdout: "" Aug 19 15:28:49.118: INFO: update-demo-nautilus-f7rqd is created but not running Aug 19 15:28:54.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1769' Aug 19 15:28:55.598: INFO: stderr: "" Aug 19 15:28:55.598: INFO: stdout: "update-demo-nautilus-f7rqd update-demo-nautilus-ktlsv " Aug 19 15:28:55.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7rqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:57.046: INFO: stderr: "" Aug 19 15:28:57.046: INFO: stdout: "true" Aug 19 15:28:57.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f7rqd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:28:58.577: INFO: stderr: "" Aug 19 15:28:58.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 15:28:58.577: INFO: validating pod update-demo-nautilus-f7rqd Aug 19 15:28:58.581: INFO: got data: { "image": "nautilus.jpg" } Aug 19 15:28:58.582: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 19 15:28:58.582: INFO: update-demo-nautilus-f7rqd is verified up and running Aug 19 15:28:58.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktlsv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:29:00.326: INFO: stderr: "" Aug 19 15:29:00.326: INFO: stdout: "true" Aug 19 15:29:00.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ktlsv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1769' Aug 19 15:29:01.769: INFO: stderr: "" Aug 19 15:29:01.769: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 19 15:29:01.769: INFO: validating pod update-demo-nautilus-ktlsv Aug 19 15:29:01.773: INFO: got data: { "image": "nautilus.jpg" } Aug 19 15:29:01.773: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 19 15:29:01.773: INFO: update-demo-nautilus-ktlsv is verified up and running STEP: using delete to clean up resources Aug 19 15:29:01.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1769' Aug 19 15:29:03.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 19 15:29:03.274: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 19 15:29:03.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1769' Aug 19 15:29:04.869: INFO: stderr: "No resources found in kubectl-1769 namespace.\n" Aug 19 15:29:04.870: INFO: stdout: "" Aug 19 15:29:04.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1769 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 19 15:29:06.401: INFO: stderr: "" Aug 19 15:29:06.401: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:29:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1769" for this suite. 
• [SLOW TEST:56.821 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":226,"skipped":3446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:29:06.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-1cb4dda8-bb14-45ec-8111-53be408d56e6 STEP: Creating a pod to test consume configMaps Aug 19 15:29:06.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab" in namespace "projected-6510" to be "Succeeded or Failed" Aug 19 15:29:06.826: INFO: Pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab": Phase="Pending", Reason="", readiness=false. Elapsed: 38.454962ms Aug 19 15:29:08.830: INFO: Pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043077812s Aug 19 15:29:11.022: INFO: Pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234641922s Aug 19 15:29:13.058: INFO: Pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab": Phase="Running", Reason="", readiness=true. Elapsed: 6.270453821s Aug 19 15:29:15.063: INFO: Pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.275960427s STEP: Saw pod success Aug 19 15:29:15.063: INFO: Pod "pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab" satisfied condition "Succeeded or Failed" Aug 19 15:29:15.067: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab container projected-configmap-volume-test: STEP: delete the pod Aug 19 15:29:15.456: INFO: Waiting for pod pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab to disappear Aug 19 15:29:15.823: INFO: Pod pod-projected-configmaps-7e1669ab-f079-4d4b-a563-1b30c72c9fab no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:29:15.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6510" for this suite. • [SLOW TEST:9.422 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3474,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:29:15.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8560/configmap-test-277fd6e4-789d-4cae-a375-a8e29e5dbcfd STEP: Creating a pod to test consume configMaps Aug 19 15:29:16.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf" in namespace "configmap-8560" to be "Succeeded or Failed" Aug 19 15:29:17.447: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Pending", Reason="", readiness=false. Elapsed: 527.196729ms Aug 19 15:29:19.764: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843777828s Aug 19 15:29:21.769: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.849485221s Aug 19 15:29:23.776: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.856419032s Aug 19 15:29:25.982: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.061860104s Aug 19 15:29:28.342: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Running", Reason="", readiness=true. Elapsed: 11.422081309s Aug 19 15:29:30.832: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.911961224s STEP: Saw pod success Aug 19 15:29:30.832: INFO: Pod "pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf" satisfied condition "Succeeded or Failed" Aug 19 15:29:31.250: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf container env-test: STEP: delete the pod Aug 19 15:29:31.460: INFO: Waiting for pod pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf to disappear Aug 19 15:29:31.484: INFO: Pod pod-configmaps-5e91e75d-0adf-4ddf-be49-bde6b0ab12cf no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:29:31.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8560" for this suite. • [SLOW TEST:15.662 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":228,"skipped":3477,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:29:31.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
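Namespace deletion is asynchronous: the namespace enters a Terminating phase while its contents are garbage-collected, which is why the test waits here rather than asserting immediately. A hand-run equivalent of this step, with hypothetical names:

  kubectl create namespace nsdelete-demo
  kubectl run sleeper --image=k8s.gcr.io/pause:3.2 --namespace=nsdelete-demo
  kubectl delete namespace nsdelete-demo --wait=true    # blocks until the namespace and the pod inside it are gone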
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:30:11.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2296" for this suite. STEP: Destroying namespace "nsdeletetest-5249" for this suite. Aug 19 15:30:12.100: INFO: Namespace nsdeletetest-5249 was already deleted STEP: Destroying namespace "nsdeletetest-2919" for this suite. • [SLOW TEST:40.602 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":229,"skipped":3487,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:30:12.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 15:30:22.942: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.946: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.951: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.954: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.965: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.968: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.972: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod 
dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.976: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:22.986: INFO: Lookups using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local] Aug 19 15:30:28.138: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.439: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.445: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.450: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.462: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.466: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.470: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.474: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:28.481: INFO: Lookups using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local] Aug 19 15:30:33.175: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.180: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.183: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.187: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.204: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.207: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.210: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.213: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:33.267: INFO: Lookups using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local] Aug 19 15:30:37.993: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:38.001: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:38.009: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:38.013: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:38.820: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:38.888: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:39.033: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:39.037: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:39.366: INFO: Lookups using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local] Aug 19 15:30:43.012: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.425: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.430: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.434: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested 
resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.523: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.598: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.603: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.607: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:43.616: INFO: Lookups using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local] Aug 19 15:30:47.992: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:47.996: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.000: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.003: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.178: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: Get "https://172.30.12.66:45453/api/v1/namespaces/dns-2153/pods/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1/proxy/results/wheezy_udp@PodARecord": stream error: stream ID 3053; INTERNAL_ERROR Aug 19 15:30:48.190: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.195: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.200: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.203: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local from pod dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1: the server could not find the requested resource (get pods dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1) Aug 19 15:30:48.211: INFO: Lookups using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2153.svc.cluster.local wheezy_udp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2153.svc.cluster.local jessie_udp@dns-test-service-2.dns-2153.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2153.svc.cluster.local] Aug 19 15:30:53.106: INFO: DNS probes using dns-2153/dns-test-f912d4cc-544b-4c82-922a-0d940a4cb2b1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:30:54.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2153" for this suite. 
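The names probed above are the standard headless-service subdomain forms: <pod-hostname>.<service>.<namespace>.svc.cluster.local for individual pods and <service>.<namespace>.svc.cluster.local for the service itself. The same records can be checked by hand from any pod with cluster DNS; a sketch (busybox:1.28 is chosen deliberately, since nslookup is broken in some later busybox builds):

  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- nslookup dns-test-service-2.dns-2153.svc.cluster.local

Against this particular namespace the lookup would now fail, since the suite destroyed it; substitute a headless service that still exists.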
• [SLOW TEST:43.029 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":230,"skipped":3509,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:30:55.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-94978501-28d5-4667-afbf-9a5f6632566c STEP: Creating configMap with name cm-test-opt-upd-08703b42-e0a6-4230-808b-fd00fc195998 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-94978501-28d5-4667-afbf-9a5f6632566c STEP: Updating configmap cm-test-opt-upd-08703b42-e0a6-4230-808b-fd00fc195998 STEP: Creating configMap with name cm-test-opt-create-2537ee50-4a97-4820-aaaa-a4bcba252a81 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:31:06.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-410" for this suite. 
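The convergence observed above (one referenced ConfigMap deleted, one updated, one created, and the projected volume eventually reflecting all three changes) relies on the kubelet's periodic volume sync rather than an immediate push. A rough way to watch the same propagation by hand, assuming a pod that mounts a ConfigMap cm-demo in a projected volume at /etc/cm (both names hypothetical):

  kubectl create configmap cm-demo --from-literal=key=value-1
  kubectl create configmap cm-demo --from-literal=key=value-2 --dry-run=client -o yaml | kubectl apply -f -
  kubectl exec <pod-name> -- cat /etc/cm/key    # prints value-2 once the kubelet resyncs the volume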
• [SLOW TEST:11.289 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3518,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:31:06.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:31:06.527: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 19 15:31:17.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1435 create -f -' Aug 19 15:31:34.456: INFO: stderr: "" Aug 19 15:31:34.456: INFO: stdout: "e2e-test-crd-publish-openapi-4369-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 19 15:31:34.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1435 delete e2e-test-crd-publish-openapi-4369-crds test-cr' Aug 19 15:31:35.904: INFO: stderr: "" Aug 19 15:31:35.904: INFO: stdout: "e2e-test-crd-publish-openapi-4369-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 19 15:31:35.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1435 apply -f -' Aug 19 15:31:39.762: INFO: stderr: "" Aug 19 15:31:39.762: INFO: stdout: "e2e-test-crd-publish-openapi-4369-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 19 15:31:39.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1435 delete e2e-test-crd-publish-openapi-4369-crds test-cr' Aug 19 15:31:42.301: INFO: stderr: "" Aug 19 15:31:42.301: INFO: stdout: "e2e-test-crd-publish-openapi-4369-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 19 15:31:42.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-4369-crds' Aug 19 15:31:46.221: INFO: stderr: "" Aug 19 15:31:46.221: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4369-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:32:07.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1435" for this suite. • [SLOW TEST:61.686 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":232,"skipped":3523,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:32:08.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 19 15:32:09.126: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 19 15:32:09.384: INFO: Waiting for terminating namespaces to be deleted... 
Aug 19 15:32:09.388: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 19 15:32:09.393: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 19 15:32:09.393: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 15:32:09.393: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 19 15:32:09.393: INFO: Container kube-proxy ready: true, restart count 0 Aug 19 15:32:09.393: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 19 15:32:09.398: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 19 15:32:09.399: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 15:32:09.399: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 19 15:32:09.399: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Aug 19 15:32:09.924: INFO: Pod kindnet-gmpqb requesting resource cpu=100m on Node latest-worker Aug 19 15:32:09.925: INFO: Pod kindnet-grzzh requesting resource cpu=100m on Node latest-worker2 Aug 19 15:32:09.925: INFO: Pod kube-proxy-82wrf requesting resource cpu=0m on Node latest-worker Aug 19 15:32:09.925: INFO: Pod kube-proxy-fjk8r requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Aug 19 15:32:09.925: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Aug 19 15:32:10.017: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688.162cb5242a0b2660], Reason = [Created], Message = [Created container filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688] STEP: Considering event: Type = [Normal], Name = [filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688.162cb523c5bbbcbc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052.162cb5244d8866d4], Reason = [Started], Message = [Started container filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052] STEP: Considering event: Type = [Normal], Name = [filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688.162cb52315b40eb6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1973/filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688.162cb52442c23baa], Reason = [Started], Message = [Started container filler-pod-2bd51fc0-6f7a-4073-897b-b9bee9af6688] STEP: Considering event: Type = [Normal], Name = [filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052.162cb5243e18a49b], Reason = [Created], Message = [Created container filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052] STEP: Considering event: Type = [Normal], Name = [filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052.162cb5231abc6208], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1973/filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f169ffca-7b9c-4c17-89bc-4f4c63642052.162cb523d155509d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Warning], Name = [additional-pod.162cb525094e64b8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162cb5251832fc54], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:32:19.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1973" for this suite. 
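The FailedScheduling events above are straightforward to reproduce: once filler pods have claimed most of each node's allocatable CPU, any further pod whose request cannot fit stays Pending with the same "Insufficient cpu" message. A sketch with an illustrative, deliberately oversized request:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "1000"              # far beyond any node's allocatable CPU
EOF
# The pod never schedules; the reason is recorded as an event:
kubectl describe pod additional-pod    # look for "Insufficient cpu" under Events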
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.812 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":233,"skipped":3539,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:32:19.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:32:38.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6456" for this suite. • [SLOW TEST:18.732 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":234,"skipped":3561,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:32:38.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:32:40.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-515" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":235,"skipped":3566,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:32:40.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 19 15:32:40.495: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5242 /api/v1/namespaces/watch-5242/configmaps/e2e-watch-test-label-changed a36651d5-f6a2-4388-a8e5-df4abff0d14a 1528296 0 2020-08-19 15:32:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-19 15:32:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 15:32:40.497: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5242 /api/v1/namespaces/watch-5242/configmaps/e2e-watch-test-label-changed a36651d5-f6a2-4388-a8e5-df4abff0d14a 1528297 0 2020-08-19 15:32:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-19 15:32:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 15:32:40.499: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5242 /api/v1/namespaces/watch-5242/configmaps/e2e-watch-test-label-changed a36651d5-f6a2-4388-a8e5-df4abff0d14a 1528298 0 2020-08-19 15:32:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-19 15:32:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 19 15:32:52.166: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5242 /api/v1/namespaces/watch-5242/configmaps/e2e-watch-test-label-changed a36651d5-f6a2-4388-a8e5-df4abff0d14a 1528347 0 2020-08-19 15:32:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-19 15:32:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 15:32:52.168: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5242 /api/v1/namespaces/watch-5242/configmaps/e2e-watch-test-label-changed a36651d5-f6a2-4388-a8e5-df4abff0d14a 1528349 0 2020-08-19 15:32:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-19 15:32:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 19 15:32:52.169: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5242 /api/v1/namespaces/watch-5242/configmaps/e2e-watch-test-label-changed a36651d5-f6a2-4388-a8e5-df4abff0d14a 1528351 0 2020-08-19 15:32:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-19 15:32:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:32:52.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5242" for this suite. 
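The ADDED/MODIFIED/DELETED sequence above is a property of label-selector watches: relabeling an object out of the selector surfaces as DELETED, and relabeling it back surfaces as ADDED, even though the object existed throughout. Roughly reproducible with kubectl (names mirror the log):

# Terminal 1: watch only ConfigMaps carrying the test label
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored -w

# Terminal 2: drive the events seen by the watch above
kubectl create configmap e2e-watch-test-label-changed
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored                # watch reports ADDED
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=unwatched --overwrite                     # watch reports DELETED
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite    # watch reports ADDED again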
• [SLOW TEST:12.062 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":236,"skipped":3574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:32:52.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-f68918d0-655b-485b-929c-df81996595db STEP: Creating a pod to test consume secrets Aug 19 15:32:52.670: INFO: Waiting up to 5m0s for pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97" in namespace "secrets-1147" to be "Succeeded or Failed" Aug 19 15:32:52.750: INFO: Pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97": Phase="Pending", Reason="", readiness=false. Elapsed: 79.296989ms Aug 19 15:32:54.777: INFO: Pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106296193s Aug 19 15:32:57.001: INFO: Pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330731976s Aug 19 15:32:59.006: INFO: Pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335971169s Aug 19 15:33:01.132: INFO: Pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.462009664s STEP: Saw pod success Aug 19 15:33:01.133: INFO: Pod "pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97" satisfied condition "Succeeded or Failed" Aug 19 15:33:01.136: INFO: Trying to get logs from node latest-worker pod pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97 container secret-env-test: STEP: delete the pod Aug 19 15:33:01.738: INFO: Waiting for pod pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97 to disappear Aug 19 15:33:01.850: INFO: Pod pod-secrets-68b069d1-98b3-424f-966b-eddb67772a97 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:33:01.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1147" for this suite. 
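The Secret consumption above goes through environment variables rather than a volume; the pod runs to completion and the test then reads its logs. A minimal sketch (names and image are illustrative):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
# After the pod reaches Succeeded, its log shows the injected value:
kubectl logs pod-secrets-demo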
• [SLOW TEST:9.616 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":237,"skipped":3602,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:33:01.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 15:33:04.015: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 15:33:06.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447984, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447984, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447984, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447983, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:33:09.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447984, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447984, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733447984, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733447983, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 15:33:12.929: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 19 15:33:13.649: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:33:14.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6806" for this suite. STEP: Destroying namespace "webhook-6806-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.009 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":238,"skipped":3613,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:33:17.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 15:33:24.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 15:33:27.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:33:29.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:33:31.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448004, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 15:33:35.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a 
non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:33:45.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3295" for this suite. STEP: Destroying namespace "webhook-3295-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:28.803 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":239,"skipped":3613,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:33:46.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 19 15:33:54.147: INFO: Successfully updated pod "annotationupdate90734954-d559-4b9a-aece-8f4afbfe5027" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:33:57.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1250" for this suite.
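The "Successfully updated pod" line above refers to patching the pod's annotations; a projected downwardAPI volume re-renders its file when pod metadata changes, which is what the test then observes inside the container. A sketch of the wiring (names and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Changing the annotation is eventually reflected in the mounted file:
kubectl annotate pod annotationupdate-demo build="two" --overwrite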
• [SLOW TEST:10.734 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3625,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:33:57.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:35:58.159: INFO: Deleting pod "var-expansion-4eb5029f-fe3e-4186-9179-3d0c725e51a9" in namespace "var-expansion-7135" Aug 19 15:35:58.168: INFO: Wait up to 5m0s for pod "var-expansion-4eb5029f-fe3e-4186-9179-3d0c725e51a9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:36:02.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7135" for this suite. • [SLOW TEST:124.796 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":241,"skipped":3636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:36:02.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:36:13.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3945" for this suite. • [SLOW TEST:11.500 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":242,"skipped":3727,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:36:13.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 19 15:36:13.830: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 19 15:36:13.849: INFO: Waiting for terminating namespaces to be deleted... 
Aug 19 15:36:13.854: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 19 15:36:13.863: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 19 15:36:13.863: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 15:36:13.863: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 19 15:36:13.863: INFO: Container kube-proxy ready: true, restart count 0 Aug 19 15:36:13.863: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 19 15:36:13.871: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded) Aug 19 15:36:13.871: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 15:36:13.871: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded) Aug 19 15:36:13.871: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162cb55bdadda742], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.162cb55bdf089a9b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:36:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5949" for this suite.
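The "didn't match node selector" events above correspond to a pod whose nodeSelector names a label no node carries, so it can never schedule. A sketch (the selector key and value are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty              # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF
# The pod stays Pending with FailedScheduling events like those logged above:
kubectl get events --field-selector involvedObject.name=restricted-pod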
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":243,"skipped":3735,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:36:14.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5885.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5885.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5885.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5885.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 19 15:36:25.158: INFO: DNS probes using dns-5885/dns-test-0da0c73d-c617-4835-a01b-80896525b9a2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:36:25.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5885" for this suite. 
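The shell loops above reduce to two checks run inside the probe pod: getent consults /etc/hosts, where the kubelet writes the pod's own FQDN when hostname and subdomain are set, while dig exercises the cluster DNS path for the pod's A record. Condensed (<probe-pod> stands in for the pod the test creates):

kubectl exec -n dns-5885 <probe-pod> -- cat /etc/hosts    # kubelet-managed entries live here
kubectl exec -n dns-5885 <probe-pod> -- getent hosts dns-querier-1.dns-test-service.dns-5885.svc.cluster.local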
• [SLOW TEST:10.615 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":244,"skipped":3757,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:36:25.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 19 15:36:26.169: INFO: Waiting up to 5m0s for pod "downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532" in namespace "downward-api-586" to be "Succeeded or Failed" Aug 19 15:36:26.186: INFO: Pod "downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532": Phase="Pending", Reason="", readiness=false. Elapsed: 17.175338ms Aug 19 15:36:28.208: INFO: Pod "downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039025673s Aug 19 15:36:30.238: INFO: Pod "downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06893149s Aug 19 15:36:32.358: INFO: Pod "downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.189477007s STEP: Saw pod success Aug 19 15:36:32.358: INFO: Pod "downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532" satisfied condition "Succeeded or Failed" Aug 19 15:36:32.639: INFO: Trying to get logs from node latest-worker pod downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532 container dapi-container: STEP: delete the pod Aug 19 15:36:32.867: INFO: Waiting for pod downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532 to disappear Aug 19 15:36:32.882: INFO: Pod downward-api-e13c6c2b-3563-4e27-9da5-77e53a6a4532 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:36:32.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-586" for this suite. 
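The downward API env vars verified above are wired with fieldRef; metadata.uid is one of the supported fields. A minimal sketch (names and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # the pod's own UID, as the test asserts
EOF
kubectl logs downward-api-demo       # after the pod Succeeds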
• [SLOW TEST:7.329 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":3760,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:36:32.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 19 15:36:33.090: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 19 15:37:59.895: INFO: >>> kubeConfig: /root/.kube/config Aug 19 15:38:21.073: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:39:37.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3213" for this suite. 
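"Multiple CRDs of same group but different versions" boils down to every served version of a group publishing its own schema into the aggregated OpenAPI document. A single multi-version CRD sketch (group and kind names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: multiversions.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: multiversions
    singular: multiversion
    kind: MultiVersion
  versions:
  - name: v1
    served: true
    storage: true                # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Both served versions are then visible through the published OpenAPI:
kubectl explain multiversions --api-version=example.com/v2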
• [SLOW TEST:184.135 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":246,"skipped":3761,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:39:37.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:39:37.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b" in namespace "downward-api-5210" to be "Succeeded or Failed" Aug 19 15:39:37.811: INFO: Pod "downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b": Phase="Pending", Reason="", readiness=false. Elapsed: 280.976495ms Aug 19 15:39:39.817: INFO: Pod "downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287578586s Aug 19 15:39:41.824: INFO: Pod "downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b": Phase="Running", Reason="", readiness=true. Elapsed: 4.293784228s Aug 19 15:39:43.829: INFO: Pod "downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.299517288s STEP: Saw pod success Aug 19 15:39:43.829: INFO: Pod "downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b" satisfied condition "Succeeded or Failed" Aug 19 15:39:43.834: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b container client-container: STEP: delete the pod Aug 19 15:39:43.899: INFO: Waiting for pod downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b to disappear Aug 19 15:39:43.903: INFO: Pod downwardapi-volume-c81dc0e7-623d-47a4-9a4b-ed1775e4fa6b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:39:43.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5210" for this suite. • [SLOW TEST:6.883 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":247,"skipped":3771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:39:43.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 15:39:45.852: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 15:39:48.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 15:39:50.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733448385, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 15:39:53.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:39:53.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8983" for this suite. STEP: Destroying namespace "webhook-8983-markers" for this suite. 
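The listing test above drives the collection endpoints of admissionregistration.k8s.io/v1: it lists the webhook configurations it created, confirms a configMap violating their rules is rejected, deletes the whole collection, and then confirms the same configMap is admitted. A sketch of the client-go calls involved, assuming a kubeconfig at the path the log uses and an illustrative label selector for the test's own configurations:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        // List only the configurations this test created (selector is illustrative).
        sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"}
        list, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().List(ctx, sel)
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d validating webhook configurations\n", len(list.Items))

        // Delete them as a collection; after this, the previously rejected
        // configMap create should succeed because no webhook vets it anymore.
        err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
            DeleteCollection(ctx, metav1.DeleteOptions{}, sel)
        if err != nil {
            panic(err)
        }
    }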
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.764 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":248,"skipped":3810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:39:53.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
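The pod object dumped below carries the two fields this DNS test is really about: dnsPolicy None, which discards the cluster's resolver settings entirely, and a dnsConfig that supplies the replacement nameserver and search list the kubelet writes into the pod's /etc/resolv.conf. A sketch reduced to the relevant fields, with values taken from the dump that follows:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // With DNSPolicy None the kubelet builds the pod's /etc/resolv.conf
        // purely from DNSConfig, ignoring node and cluster DNS settings.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-4256"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
                    Args:  []string{"pause"},
                }},
                DNSPolicy: corev1.DNSNone,
                DNSConfig: &corev1.PodDNSConfig{
                    Nameservers: []string{"1.1.1.1"},
                    Searches:    []string{"resolv.conf.local"},
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }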
Aug 19 15:39:53.762: INFO: Created pod &Pod{ObjectMeta:{dns-4256 dns-4256 /api/v1/namespaces/dns-4256/pods/dns-4256 b2c95a3b-7759-4dd4-9be9-1ec01c9d441f 1529961 0 2020-08-19 15:39:53 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-19 15:39:53 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2w2xt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2w2xt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2w2xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNod
eName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 19 15:39:53.790: INFO: The status of Pod dns-4256 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:39:55.796: INFO: The status of Pod dns-4256 is Pending, waiting for it to be Running (with Ready = true) Aug 19 15:39:57.795: INFO: The status of Pod dns-4256 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 19 15:39:57.796: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4256 PodName:dns-4256 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:39:57.796: INFO: >>> kubeConfig: /root/.kube/config I0819 15:39:57.856846 10 log.go:181] (0x40053e22c0) (0x4001014140) Create stream I0819 15:39:57.856991 10 log.go:181] (0x40053e22c0) (0x4001014140) Stream added, broadcasting: 1 I0819 15:39:57.859832 10 log.go:181] (0x40053e22c0) Reply frame received for 1 I0819 15:39:57.859964 10 log.go:181] (0x40053e22c0) (0x40032cf540) Create stream I0819 15:39:57.860032 10 log.go:181] (0x40053e22c0) (0x40032cf540) Stream added, broadcasting: 3 I0819 15:39:57.861263 10 log.go:181] (0x40053e22c0) Reply frame received for 3 I0819 15:39:57.861360 10 log.go:181] (0x40053e22c0) (0x40010141e0) Create stream I0819 15:39:57.861418 10 log.go:181] (0x40053e22c0) (0x40010141e0) Stream added, broadcasting: 5 I0819 15:39:57.862276 10 log.go:181] (0x40053e22c0) Reply frame received for 5 I0819 15:39:57.959021 10 log.go:181] (0x40053e22c0) Data frame received for 3 I0819 15:39:57.959168 10 log.go:181] (0x40032cf540) (3) Data frame handling I0819 15:39:57.959280 10 log.go:181] (0x40032cf540) (3) Data frame sent I0819 15:39:57.962013 10 log.go:181] (0x40053e22c0) Data frame received for 3 I0819 15:39:57.962177 10 log.go:181] (0x40032cf540) (3) Data frame handling I0819 15:39:57.962617 10 log.go:181] (0x40053e22c0) Data frame received for 5 I0819 15:39:57.962733 10 log.go:181] (0x40010141e0) (5) Data frame handling I0819 15:39:57.964861 10 log.go:181] (0x40053e22c0) Data frame received for 1 I0819 15:39:57.964931 10 log.go:181] (0x4001014140) (1) Data frame handling I0819 15:39:57.965001 10 log.go:181] (0x4001014140) (1) Data frame sent I0819 15:39:57.965078 10 log.go:181] (0x40053e22c0) (0x4001014140) Stream removed, broadcasting: 1 I0819 15:39:57.965172 10 log.go:181] (0x40053e22c0) Go away received I0819 15:39:57.965372 10 log.go:181] (0x40053e22c0) (0x4001014140) Stream removed, broadcasting: 1 I0819 15:39:57.965481 10 log.go:181] (0x40053e22c0) (0x40032cf540) Stream removed, broadcasting: 3 I0819 15:39:57.965545 10 log.go:181] (0x40053e22c0) (0x40010141e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
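The ExecWithOptions traffic above is client-go's remote-command machinery: one SPDY connection to the pods/exec subresource multiplexes the error, stdout, and stderr channels as separate streams (the IDs 1, 3, and 5 being added and broadcast in these frames), and the test reads the agnhost output back from stdout. A sketch of issuing the same kind of exec with plain client-go, using the pod, container, and command names from the log:

    package main

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Build the pods/exec URL the same way the e2e framework does.
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").Namespace("dns-4256").Name("dns-4256").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "agnhost",
                Command:   []string{"/agnhost", "dns-suffix"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        var stdout, stderr bytes.Buffer
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Printf("stdout: %q stderr: %q\n", stdout.String(), stderr.String())
    }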
Aug 19 15:39:57.966: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4256 PodName:dns-4256 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:39:57.966: INFO: >>> kubeConfig: /root/.kube/config I0819 15:39:58.016774 10 log.go:181] (0x40042c6bb0) (0x4003769360) Create stream I0819 15:39:58.016884 10 log.go:181] (0x40042c6bb0) (0x4003769360) Stream added, broadcasting: 1 I0819 15:39:58.020000 10 log.go:181] (0x40042c6bb0) Reply frame received for 1 I0819 15:39:58.020217 10 log.go:181] (0x40042c6bb0) (0x4002748000) Create stream I0819 15:39:58.020400 10 log.go:181] (0x40042c6bb0) (0x4002748000) Stream added, broadcasting: 3 I0819 15:39:58.021751 10 log.go:181] (0x40042c6bb0) Reply frame received for 3 I0819 15:39:58.021860 10 log.go:181] (0x40042c6bb0) (0x40037694a0) Create stream I0819 15:39:58.021919 10 log.go:181] (0x40042c6bb0) (0x40037694a0) Stream added, broadcasting: 5 I0819 15:39:58.023057 10 log.go:181] (0x40042c6bb0) Reply frame received for 5 I0819 15:39:58.103430 10 log.go:181] (0x40042c6bb0) Data frame received for 3 I0819 15:39:58.103540 10 log.go:181] (0x4002748000) (3) Data frame handling I0819 15:39:58.103637 10 log.go:181] (0x4002748000) (3) Data frame sent I0819 15:39:58.104189 10 log.go:181] (0x40042c6bb0) Data frame received for 3 I0819 15:39:58.104269 10 log.go:181] (0x4002748000) (3) Data frame handling I0819 15:39:58.104425 10 log.go:181] (0x40042c6bb0) Data frame received for 5 I0819 15:39:58.104543 10 log.go:181] (0x40037694a0) (5) Data frame handling I0819 15:39:58.105634 10 log.go:181] (0x40042c6bb0) Data frame received for 1 I0819 15:39:58.105702 10 log.go:181] (0x4003769360) (1) Data frame handling I0819 15:39:58.105784 10 log.go:181] (0x4003769360) (1) Data frame sent I0819 15:39:58.105972 10 log.go:181] (0x40042c6bb0) (0x4003769360) Stream removed, broadcasting: 1 I0819 15:39:58.106050 10 log.go:181] (0x40042c6bb0) Go away received I0819 15:39:58.106313 10 log.go:181] (0x40042c6bb0) (0x4003769360) Stream removed, broadcasting: 1 I0819 15:39:58.106440 10 log.go:181] (0x40042c6bb0) (0x4002748000) Stream removed, broadcasting: 3 I0819 15:39:58.106513 10 log.go:181] (0x40042c6bb0) (0x40037694a0) Stream removed, broadcasting: 5 Aug 19 15:39:58.106: INFO: Deleting pod dns-4256... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:39:58.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4256" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":249,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:39:58.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-5034/configmap-test-db2001e5-e169-4365-a923-b41621c2d7f5 STEP: Creating a pod to test consume configMaps Aug 19 15:39:58.873: INFO: Waiting up to 5m0s for pod "pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f" in namespace "configmap-5034" to be "Succeeded or Failed" Aug 19 15:39:58.911: INFO: Pod "pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.075062ms Aug 19 15:40:01.038: INFO: Pod "pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164324358s Aug 19 15:40:03.097: INFO: Pod "pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f": Phase="Running", Reason="", readiness=true. Elapsed: 4.223278709s Aug 19 15:40:05.102: INFO: Pod "pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228848861s STEP: Saw pod success Aug 19 15:40:05.102: INFO: Pod "pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f" satisfied condition "Succeeded or Failed" Aug 19 15:40:05.107: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f container env-test: STEP: delete the pod Aug 19 15:40:05.247: INFO: Waiting for pod pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f to disappear Aug 19 15:40:05.267: INFO: Pod pod-configmaps-e4af164a-77be-417d-b10c-c046fce8a94f no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:40:05.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5034" for this suite. 
• [SLOW TEST:7.101 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":3887,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:40:05.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:40:05.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9" in namespace "projected-1082" to be "Succeeded or Failed" Aug 19 15:40:05.466: INFO: Pod "downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.356252ms Aug 19 15:40:07.517: INFO: Pod "downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080267971s Aug 19 15:40:09.523: INFO: Pod "downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086788217s STEP: Saw pod success Aug 19 15:40:09.523: INFO: Pod "downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9" satisfied condition "Succeeded or Failed" Aug 19 15:40:09.529: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9 container client-container: STEP: delete the pod Aug 19 15:40:09.558: INFO: Waiting for pod downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9 to disappear Aug 19 15:40:09.566: INFO: Pod downwardapi-volume-d0bbd219-9bfd-4829-9466-6a7f6fdfc8d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:40:09.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1082" for this suite. 
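Projected downward API volumes surface container resource fields as files; for the test above the interesting item is a resourceFieldRef on requests.cpu, which the client-container then reads back from the mounted path. A sketch of that volume source; the file path and divisor are illustrative, the container name comes from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // A projected volume whose only source exposes the container's
        // CPU request as the file cpu_request inside the mount.
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.cpu",
                                    // A 1m divisor makes the file read in millicores.
                                    Divisor: resource.MustParse("1m"),
                                },
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }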
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":251,"skipped":3887,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:40:09.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 19 15:40:09.645: INFO: namespace kubectl-7784 Aug 19 15:40:09.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7784' Aug 19 15:40:12.233: INFO: stderr: "" Aug 19 15:40:12.233: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 19 15:40:13.242: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:40:13.243: INFO: Found 0 / 1 Aug 19 15:40:14.325: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:40:14.325: INFO: Found 0 / 1 Aug 19 15:40:15.240: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:40:15.240: INFO: Found 0 / 1 Aug 19 15:40:16.239: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:40:16.240: INFO: Found 1 / 1 Aug 19 15:40:16.240: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 19 15:40:16.245: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:40:16.245: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 19 15:40:16.245: INFO: wait on agnhost-primary startup in kubectl-7784 Aug 19 15:40:16.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs agnhost-primary-qwwlg agnhost-primary --namespace=kubectl-7784' Aug 19 15:40:17.815: INFO: stderr: "" Aug 19 15:40:17.815: INFO: stdout: "Paused\n" STEP: exposing RC Aug 19 15:40:17.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7784' Aug 19 15:40:19.284: INFO: stderr: "" Aug 19 15:40:19.284: INFO: stdout: "service/rm2 exposed\n" Aug 19 15:40:19.305: INFO: Service rm2 in namespace kubectl-7784 found. STEP: exposing service Aug 19 15:40:21.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7784' Aug 19 15:40:22.781: INFO: stderr: "" Aug 19 15:40:22.781: INFO: stdout: "service/rm3 exposed\n" Aug 19 15:40:22.795: INFO: Service rm3 in namespace kubectl-7784 found. 
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:40:24.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7784" for this suite. • [SLOW TEST:15.238 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":252,"skipped":3902,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:40:24.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 15:40:24.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4" in namespace "projected-4693" to be "Succeeded or Failed" Aug 19 15:40:24.933: INFO: Pod "downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.451716ms Aug 19 15:40:26.940: INFO: Pod "downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013128834s Aug 19 15:40:29.020: INFO: Pod "downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4": Phase="Running", Reason="", readiness=true. Elapsed: 4.092851066s Aug 19 15:40:31.027: INFO: Pod "downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.100007354s STEP: Saw pod success Aug 19 15:40:31.028: INFO: Pod "downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4" satisfied condition "Succeeded or Failed" Aug 19 15:40:31.033: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4 container client-container: STEP: delete the pod Aug 19 15:40:31.233: INFO: Waiting for pod downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4 to disappear Aug 19 15:40:31.305: INFO: Pod downwardapi-volume-2e4e7909-fa2d-4076-a9c9-d63774a7aec4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:40:31.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4693" for this suite. • [SLOW TEST:6.687 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":253,"skipped":3916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:40:31.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Aug 19 15:40:38.033: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-9250 PodName:var-expansion-224e383b-6971-4551-9d95-723244fc2b30 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:40:38.033: INFO: >>> kubeConfig: /root/.kube/config I0819 15:40:38.095325 10 log.go:181] (0x4000543080) (0x40036b5ae0) Create stream I0819 15:40:38.095492 10 log.go:181] (0x4000543080) (0x40036b5ae0) Stream added, broadcasting: 1 I0819 15:40:38.098655 10 log.go:181] (0x4000543080) Reply frame received for 1 I0819 15:40:38.098835 10 log.go:181] (0x4000543080) (0x4003620be0) Create stream I0819 15:40:38.098937 10 log.go:181] (0x4000543080) (0x4003620be0) Stream added, broadcasting: 3 I0819 15:40:38.100264 10 log.go:181] (0x4000543080) Reply frame received for 3 I0819 15:40:38.100376 10 
log.go:181] (0x4000543080) (0x40036b5b80) Create stream I0819 15:40:38.100441 10 log.go:181] (0x4000543080) (0x40036b5b80) Stream added, broadcasting: 5 I0819 15:40:38.102119 10 log.go:181] (0x4000543080) Reply frame received for 5 I0819 15:40:38.183686 10 log.go:181] (0x4000543080) Data frame received for 3 I0819 15:40:38.183881 10 log.go:181] (0x4003620be0) (3) Data frame handling I0819 15:40:38.184153 10 log.go:181] (0x4000543080) Data frame received for 5 I0819 15:40:38.184409 10 log.go:181] (0x40036b5b80) (5) Data frame handling I0819 15:40:38.184535 10 log.go:181] (0x4000543080) Data frame received for 1 I0819 15:40:38.184641 10 log.go:181] (0x40036b5ae0) (1) Data frame handling I0819 15:40:38.184819 10 log.go:181] (0x40036b5ae0) (1) Data frame sent I0819 15:40:38.184917 10 log.go:181] (0x4000543080) (0x40036b5ae0) Stream removed, broadcasting: 1 I0819 15:40:38.185037 10 log.go:181] (0x4000543080) Go away received I0819 15:40:38.185366 10 log.go:181] (0x4000543080) (0x40036b5ae0) Stream removed, broadcasting: 1 I0819 15:40:38.185484 10 log.go:181] (0x4000543080) (0x4003620be0) Stream removed, broadcasting: 3 I0819 15:40:38.185575 10 log.go:181] (0x4000543080) (0x40036b5b80) Stream removed, broadcasting: 5 STEP: test for file in mounted path Aug 19 15:40:38.191: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-9250 PodName:var-expansion-224e383b-6971-4551-9d95-723244fc2b30 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 19 15:40:38.191: INFO: >>> kubeConfig: /root/.kube/config I0819 15:40:38.249311 10 log.go:181] (0x40000ffef0) (0x400372ef00) Create stream I0819 15:40:38.249429 10 log.go:181] (0x40000ffef0) (0x400372ef00) Stream added, broadcasting: 1 I0819 15:40:38.252563 10 log.go:181] (0x40000ffef0) Reply frame received for 1 I0819 15:40:38.252836 10 log.go:181] (0x40000ffef0) (0x4002749ea0) Create stream I0819 15:40:38.252923 10 log.go:181] (0x40000ffef0) (0x4002749ea0) Stream added, broadcasting: 3 I0819 15:40:38.254264 10 log.go:181] (0x40000ffef0) Reply frame received for 3 I0819 15:40:38.254426 10 log.go:181] (0x40000ffef0) (0x4003620f00) Create stream I0819 15:40:38.254512 10 log.go:181] (0x40000ffef0) (0x4003620f00) Stream added, broadcasting: 5 I0819 15:40:38.255694 10 log.go:181] (0x40000ffef0) Reply frame received for 5 I0819 15:40:38.310497 10 log.go:181] (0x40000ffef0) Data frame received for 5 I0819 15:40:38.310648 10 log.go:181] (0x4003620f00) (5) Data frame handling I0819 15:40:38.310775 10 log.go:181] (0x40000ffef0) Data frame received for 3 I0819 15:40:38.310916 10 log.go:181] (0x4002749ea0) (3) Data frame handling I0819 15:40:38.311835 10 log.go:181] (0x40000ffef0) Data frame received for 1 I0819 15:40:38.311936 10 log.go:181] (0x400372ef00) (1) Data frame handling I0819 15:40:38.312042 10 log.go:181] (0x400372ef00) (1) Data frame sent I0819 15:40:38.312162 10 log.go:181] (0x40000ffef0) (0x400372ef00) Stream removed, broadcasting: 1 I0819 15:40:38.312276 10 log.go:181] (0x40000ffef0) Go away received I0819 15:40:38.312621 10 log.go:181] (0x40000ffef0) (0x400372ef00) Stream removed, broadcasting: 1 I0819 15:40:38.312808 10 log.go:181] (0x40000ffef0) (0x4002749ea0) Stream removed, broadcasting: 3 I0819 15:40:38.312905 10 log.go:181] (0x40000ffef0) (0x4003620f00) Stream removed, broadcasting: 5 STEP: updating the annotation value Aug 19 15:40:38.828: INFO: Successfully updated pod "var-expansion-224e383b-6971-4551-9d95-723244fc2b30" STEP: waiting for annotated 
pod running STEP: deleting the pod gracefully Aug 19 15:40:38.848: INFO: Deleting pod "var-expansion-224e383b-6971-4551-9d95-723244fc2b30" in namespace "var-expansion-9250" Aug 19 15:40:38.854: INFO: Wait up to 5m0s for pod "var-expansion-224e383b-6971-4551-9d95-723244fc2b30" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:41:20.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9250" for this suite. • [SLOW TEST:49.382 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":254,"skipped":3972,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:41:20.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:41:21.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8537' Aug 19 15:41:25.351: INFO: stderr: "" Aug 19 15:41:25.351: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Aug 19 15:41:25.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8537' Aug 19 15:41:28.213: INFO: stderr: "" Aug 19 15:41:28.213: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 19 15:41:29.222: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:41:29.222: INFO: Found 1 / 1 Aug 19 15:41:29.222: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 19 15:41:29.228: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:41:29.228: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
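The variable-expansion test that finished above hinges on subPathExpr: the same volume is mounted once whole and once through a subPath expanded from a downward API env var, so a file touched at /volume_mount/mypath/foo/test.log must show up as /subpath_mount/test.log. A sketch of that container wiring; the annotation key and env var name are assumptions chosen to match the paths in the log, not copied from the fixture source:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Downward API env var fed from a pod annotation whose assumed
        // value is "mypath/foo".
        env := corev1.EnvVar{
            Name: "SUBPATH",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{
                    FieldPath: "metadata.annotations['mysubpath']",
                },
            },
        }
        container := corev1.Container{
            Name: "dapi-container",
            Env:  []corev1.EnvVar{env},
            VolumeMounts: []corev1.VolumeMount{
                // The whole volume, where the test touches mypath/foo/test.log ...
                {Name: "workdir", MountPath: "/volume_mount"},
                // ... and the same volume through the expanded subpath, where
                // the file must then be visible as /subpath_mount/test.log.
                {Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(SUBPATH)"},
            },
        }
        out, _ := json.MarshalIndent(container, "", "  ")
        fmt.Println(string(out))
    }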
Aug 19 15:41:29.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe pod agnhost-primary-xbk6z --namespace=kubectl-8537' Aug 19 15:41:33.819: INFO: stderr: "" Aug 19 15:41:33.819: INFO: stdout: "Name: agnhost-primary-xbk6z\nNamespace: kubectl-8537\nPriority: 0\nNode: latest-worker2/172.18.0.14\nStart Time: Wed, 19 Aug 2020 15:41:25 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.121\nIPs:\n IP: 10.244.1.121\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://ec6a46a029aaa13b15049d577e62a6ebebefd0785b4bb8454be0e2e1b651f95a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 19 Aug 2020 15:41:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wtrg2 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-wtrg2:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wtrg2\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s Successfully assigned kubectl-8537/agnhost-primary-xbk6z to latest-worker2\n Normal Pulled 7s kubelet, latest-worker2 Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 6s kubelet, latest-worker2 Created container agnhost-primary\n Normal Started 5s kubelet, latest-worker2 Started container agnhost-primary\n" Aug 19 15:41:33.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-8537' Aug 19 15:41:36.306: INFO: stderr: "" Aug 19 15:41:36.307: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8537\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 11s replication-controller Created pod: agnhost-primary-xbk6z\n" Aug 19 15:41:36.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-8537' Aug 19 15:41:37.826: INFO: stderr: "" Aug 19 15:41:37.826: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8537\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.96.147.49\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.121:6379\nSession Affinity: None\nEvents: \n" Aug 19 15:41:37.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe node latest-control-plane' Aug 
19 15:41:39.437: INFO: stderr: "" Aug 19 15:41:39.437: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:42:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 19 Aug 2020 15:41:33 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 19 Aug 2020 15:36:49 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 19 Aug 2020 15:36:49 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 19 Aug 2020 15:36:49 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 19 Aug 2020 15:36:49 +0000 Sat, 15 Aug 2020 09:42:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.12\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 355da13825784523b4a253c23edd1334\n System UUID: 8f367e0f-042b-45ff-9966-5ca6bcc1cc56\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-f7hdg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d5h\n kube-system coredns-f9fd979d6-vxzgb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d5h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kindnet-qmj2d 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d5h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-proxy-8zfjc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\n local-path-storage local-path-provisioner-8b46957d4-csnr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d5h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 19 15:41:39.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 
--kubeconfig=/root/.kube/config describe namespace kubectl-8537' Aug 19 15:41:40.883: INFO: stderr: "" Aug 19 15:41:40.883: INFO: stdout: "Name: kubectl-8537\nLabels: e2e-framework=kubectl\n e2e-run=cc2da83f-3828-4aa2-8bb2-ad9bc28cd7a9\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:41:40.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8537" for this suite. • [SLOW TEST:20.001 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":255,"skipped":3975,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:41:40.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 19 15:41:40.975: INFO: Waiting up to 5m0s for pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e" in namespace "downward-api-1069" to be "Succeeded or Failed" Aug 19 15:41:40.992: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.531962ms Aug 19 15:41:43.000: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024538798s Aug 19 15:41:45.808: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.832504971s Aug 19 15:41:48.272: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.296892511s Aug 19 15:41:50.598: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.62251262s Aug 19 15:41:52.807: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.831575661s Aug 19 15:41:54.813: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Running", Reason="", readiness=true. Elapsed: 13.838034099s Aug 19 15:41:57.160: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.184372125s STEP: Saw pod success Aug 19 15:41:57.160: INFO: Pod "downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e" satisfied condition "Succeeded or Failed" Aug 19 15:41:57.165: INFO: Trying to get logs from node latest-worker pod downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e container dapi-container: STEP: delete the pod Aug 19 15:41:57.729: INFO: Waiting for pod downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e to disappear Aug 19 15:41:57.956: INFO: Pod downward-api-2c149fcf-960e-4ac4-9179-edbff673bf3e no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:41:57.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1069" for this suite. • [SLOW TEST:17.075 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4021,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:41:57.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:41:58.974: INFO: Creating ReplicaSet my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f Aug 19 15:41:59.273: INFO: Pod name my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f: Found 0 pods out of 1 Aug 19 15:42:04.304: INFO: Pod name my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f: Found 1 pods out of 1 Aug 19 15:42:04.305: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f" is running Aug 19 15:42:10.493: INFO: Pod "my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f-xr4tl" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 15:42:00 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 15:42:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 15:42:00 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-19 15:41:59 +0000 UTC Reason: Message:}]) Aug 19 15:42:10.496: INFO: Trying to dial the pod Aug 19 15:42:15.515: INFO: Controller my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f: Got expected result from replica 1 [my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f-xr4tl]: "my-hostname-basic-e59b0eb4-fdbb-4e82-8e6b-9117a3673c0f-xr4tl", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:42:15.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4105" for this suite. • [SLOW TEST:17.555 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":257,"skipped":4033,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:42:15.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Aug 19 15:42:15.600: INFO: Major version: 1 STEP: Confirm minor version Aug 19 15:42:15.600: INFO: cleanMinorVersion: 19 Aug 19 15:42:15.600: INFO: Minor version: 19+ [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:42:15.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-4930" for this suite. 
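The server-version check above is a single discovery call; major and minor arrive as strings, which is why the log shows minor version "19+" and a cleanMinorVersion of "19" once the suffix is stripped for comparison. A sketch using client-go's discovery client, assuming the kubeconfig path from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /version; returns a version.Info with string Major/Minor fields.
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("major=%s minor=%s gitVersion=%s\n", info.Major, info.Minor, info.GitVersion)
    }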
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":258,"skipped":4048,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:42:15.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:42:31.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3885" for this suite. • [SLOW TEST:16.167 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":259,"skipped":4050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:42:31.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-6701fc85-f4c8-49c5-8010-78a6982dd401 STEP: Creating secret with name s-test-opt-upd-a0dfca26-7832-4ed8-92de-34c2909fab6d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6701fc85-f4c8-49c5-8010-78a6982dd401 STEP: Updating secret s-test-opt-upd-a0dfca26-7832-4ed8-92de-34c2909fab6d STEP: Creating secret with name s-test-opt-create-245c3d4a-d3c2-486b-afeb-38ade92d03d4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:42:42.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1997" for this suite. 
• [SLOW TEST:10.515 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4118,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:42:42.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 19 15:42:42.726: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:44:40.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7678" for this suite. 
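The roughly two-minute runtime of the CustomResourcePublishOpenAPI spec above is mostly spent waiting for the aggregated OpenAPI document to converge after the version rename. The multi-version CRD it manipulates has roughly this shape; the group, kind, and version names here are illustrative assumptions, not the generated ones:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com             # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v2                         # "rename a version" means changing this field
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v4                         # the "other version" the test checks is unchanged
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

After the rename, the new version name must be served, the old name must disappear, and the untouched version must remain unchanged, matching the three "check ..." steps logged above.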
• [SLOW TEST:118.276 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":261,"skipped":4120,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:44:40.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:45:00.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6476" for this suite. 
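The Job spec above depends on restartPolicy: OnFailure, so failing containers are restarted in place by the kubelet ("locally restarted") rather than the Job controller spawning replacement pods. A sketch of a job whose tasks fail on the first attempt and succeed on the restart; the counts, image, and the fail-once trick are illustrative assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local              # illustrative name
spec:
  completions: 4                     # assumed small completion count
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure       # the key setting: the kubelet restarts the container in place
      volumes:
      - name: data
        emptyDir: {}                 # survives container restarts within the same pod
      containers:
      - name: c
        image: busybox               # assumed stand-in image
        volumeMounts:
        - name: data
          mountPath: /data
        # first run leaves a marker and fails; the local restart finds it and succeeds
        command: ["sh", "-c", "if [ -f /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"]

"Ensuring job reaches completions" then amounts to waiting for status.succeeded to reach spec.completions.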
• [SLOW TEST:20.097 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":262,"skipped":4131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:45:00.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 in namespace container-probe-4996 Aug 19 15:45:06.829: INFO: Started pod liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 in namespace container-probe-4996 STEP: checking the pod's current state and verifying that restartCount is present Aug 19 15:45:06.834: INFO: Initial restart count of pod liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 is 0 Aug 19 15:45:26.994: INFO: Restart count of pod container-probe-4996/liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 is now 1 (20.159121378s elapsed) Aug 19 15:45:45.276: INFO: Restart count of pod container-probe-4996/liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 is now 2 (38.441408091s elapsed) Aug 19 15:46:07.609: INFO: Restart count of pod container-probe-4996/liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 is now 3 (1m0.774491756s elapsed) Aug 19 15:46:27.714: INFO: Restart count of pod container-probe-4996/liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 is now 4 (1m20.8795634s elapsed) Aug 19 15:46:45.773: INFO: Restart count of pod container-probe-4996/liveness-ecd453bb-1ee6-462d-90f5-377ea35c7169 is now 5 (1m38.938310116s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:46:45.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4996" for this suite. 
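The monotonically increasing restart counts above (1 through 5 over about 100 seconds) are produced by a liveness probe that starts failing shortly after each container start. The canonical shape of such a pod, with assumed image and probe timings:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo                # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox                   # assumed stand-in image
    # healthy for 10 seconds, then the probe target disappears
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5         # assumed probe timings
      periodSeconds: 5

Each probe failure past the failure threshold kills the container and bumps status.containerStatuses[].restartCount, which is the field the test polls and requires to only ever increase.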
• [SLOW TEST:105.322 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4168,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:46:46.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-7b08504e-737f-4446-b9a2-1a1239b3b13d in namespace container-probe-3571 Aug 19 15:46:50.543: INFO: Started pod test-webserver-7b08504e-737f-4446-b9a2-1a1239b3b13d in namespace container-probe-3571 STEP: checking the pod's current state and verifying that restartCount is present Aug 19 15:46:50.549: INFO: Initial restart count of pod test-webserver-7b08504e-737f-4446-b9a2-1a1239b3b13d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:50:52.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3571" for this suite. 
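The /healthz counterpart that follows is the inverse check: a webserver whose liveness endpoint keeps succeeding, observed for roughly four minutes to confirm restartCount stays at 0. The probe stanza would look something like this; the agnhost subcommand, path, and port are assumptions, with the image version taken from elsewhere in this run:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo          # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # version as seen later in this log
    args: ["test-webserver"]         # assumption: serves plain HTTP for the probe to hit
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10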
• [SLOW TEST:246.345 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":264,"skipped":4176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:50:52.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9156 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9156 STEP: Creating statefulset with conflicting port in namespace statefulset-9156 STEP: Waiting until pod test-pod starts running in namespace statefulset-9156 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-9156 Aug 19 15:50:58.766: INFO: Observed stateful pod in namespace: statefulset-9156, name: ss-0, uid: ec713afa-8e46-4b33-96b2-f93ca560ff01, status phase: Pending. Waiting for statefulset controller to delete. Aug 19 15:50:59.296: INFO: Observed stateful pod in namespace: statefulset-9156, name: ss-0, uid: ec713afa-8e46-4b33-96b2-f93ca560ff01, status phase: Failed. Waiting for statefulset controller to delete. Aug 19 15:50:59.301: INFO: Observed stateful pod in namespace: statefulset-9156, name: ss-0, uid: ec713afa-8e46-4b33-96b2-f93ca560ff01, status phase: Failed. Waiting for statefulset controller to delete.
Aug 19 15:50:59.362: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9156 STEP: Removing pod with conflicting port in namespace statefulset-9156 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-9156 and is running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 19 15:51:05.466: INFO: Deleting all statefulset in ns statefulset-9156 Aug 19 15:51:05.470: INFO: Scaling statefulset ss to 0 Aug 19 15:51:25.513: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 15:51:25.519: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:51:25.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9156" for this suite. • [SLOW TEST:33.183 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":265,"skipped":4218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:51:25.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 19 15:51:25.741: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Aug 19 15:51:25.750: INFO: starting watch STEP: patching STEP: updating Aug 19 15:51:25.769: INFO: waiting for watch events with expected annotations Aug 19 15:51:25.770: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:51:25.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-7342" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":266,"skipped":4242,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:51:25.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 19 15:51:25.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-490' Aug 19 15:51:29.216: INFO: stderr: "" Aug 19 15:51:29.216: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 19 15:51:30.225: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:51:30.225: INFO: Found 0 / 1 Aug 19 15:51:31.546: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:51:31.546: INFO: Found 0 / 1 Aug 19 15:51:32.239: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:51:32.239: INFO: Found 0 / 1 Aug 19 15:51:33.481: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:51:33.481: INFO: Found 1 / 1 Aug 19 15:51:33.481: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 19 15:51:33.488: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:51:33.488: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 19 15:51:33.489: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config patch pod agnhost-primary-4txqb --namespace=kubectl-490 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 19 15:51:38.281: INFO: stderr: "" Aug 19 15:51:38.281: INFO: stdout: "pod/agnhost-primary-4txqb patched\n" STEP: checking annotations Aug 19 15:51:38.331: INFO: Selector matched 1 pods for map[app:agnhost] Aug 19 15:51:38.331: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:51:38.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-490" for this suite. 
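The kubectl patch spec above first creates a ReplicationController from stdin and then strategic-merge-patches each of its pods. The created object is approximately the following; the image version is taken from elsewhere in this run, the container port is an assumption, and the app: agnhost selector matches the label the test filters on:

apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    app: agnhost
  template:
    metadata:
      labels:
        app: agnhost
    spec:
      containers:
      - name: agnhost-primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # version as seen later in this log
        ports:
        - containerPort: 6379        # assumed port

The subsequent patch body {"metadata":{"annotations":{"x":"y"}}} merges one annotation onto each matching pod, and "checking annotations" verifies it landed.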
• [SLOW TEST:12.474 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":267,"skipped":4242,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:51:38.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:51:38.435: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3045 I0819 15:51:38.509699 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3045, replica count: 1 I0819 15:51:39.561515 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:51:40.562266 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:51:41.562918 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 15:51:42.563658 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 15:51:42.704: INFO: Created: latency-svc-nvn2t Aug 19 15:51:42.725: INFO: Got endpoints: latency-svc-nvn2t [58.670379ms] Aug 19 15:51:42.786: INFO: Created: latency-svc-qwv57 Aug 19 15:51:42.802: INFO: Got endpoints: latency-svc-qwv57 [75.418025ms] Aug 19 15:51:42.822: INFO: Created: latency-svc-6x6sd Aug 19 15:51:42.849: INFO: Got endpoints: latency-svc-6x6sd [122.810545ms] Aug 19 15:51:42.935: INFO: Created: latency-svc-jpmkv Aug 19 15:51:42.966: INFO: Got endpoints: latency-svc-jpmkv [239.452377ms] Aug 19 15:51:43.010: INFO: Created: latency-svc-zkpnb Aug 19 15:51:43.023: INFO: Got endpoints: latency-svc-zkpnb [296.985769ms] Aug 19 15:51:43.073: INFO: Created: latency-svc-xppsn Aug 19 15:51:43.084: INFO: Got endpoints: latency-svc-xppsn [356.463443ms] Aug 19 15:51:43.134: INFO: Created: latency-svc-jbrjb Aug 19 15:51:43.150: INFO: Got endpoints: latency-svc-jbrjb [421.87127ms] Aug 19 
15:51:43.240: INFO: Created: latency-svc-dqw8k Aug 19 15:51:43.244: INFO: Got endpoints: latency-svc-dqw8k [516.831951ms] Aug 19 15:51:43.314: INFO: Created: latency-svc-7kmc8 Aug 19 15:51:43.332: INFO: Got endpoints: latency-svc-7kmc8 [603.93535ms] Aug 19 15:51:43.379: INFO: Created: latency-svc-bmf45 Aug 19 15:51:43.384: INFO: Got endpoints: latency-svc-bmf45 [657.841905ms] Aug 19 15:51:43.403: INFO: Created: latency-svc-d22tp Aug 19 15:51:43.414: INFO: Got endpoints: latency-svc-d22tp [687.539203ms] Aug 19 15:51:43.433: INFO: Created: latency-svc-7xtfw Aug 19 15:51:43.446: INFO: Got endpoints: latency-svc-7xtfw [717.713585ms] Aug 19 15:51:43.515: INFO: Created: latency-svc-nzbbl Aug 19 15:51:43.521: INFO: Got endpoints: latency-svc-nzbbl [794.092843ms] Aug 19 15:51:43.543: INFO: Created: latency-svc-whzrf Aug 19 15:51:43.556: INFO: Got endpoints: latency-svc-whzrf [829.6189ms] Aug 19 15:51:43.580: INFO: Created: latency-svc-zfg6s Aug 19 15:51:43.604: INFO: Got endpoints: latency-svc-zfg6s [83.015999ms] Aug 19 15:51:43.653: INFO: Created: latency-svc-x4hwt Aug 19 15:51:43.659: INFO: Got endpoints: latency-svc-x4hwt [931.178766ms] Aug 19 15:51:43.700: INFO: Created: latency-svc-zrqzr Aug 19 15:51:43.713: INFO: Got endpoints: latency-svc-zrqzr [984.982286ms] Aug 19 15:51:43.737: INFO: Created: latency-svc-g9hwt Aug 19 15:51:43.852: INFO: Got endpoints: latency-svc-g9hwt [1.049867675s] Aug 19 15:51:43.855: INFO: Created: latency-svc-xhw4t Aug 19 15:51:43.881: INFO: Got endpoints: latency-svc-xhw4t [1.031750477s] Aug 19 15:51:43.917: INFO: Created: latency-svc-5sdlj Aug 19 15:51:43.948: INFO: Got endpoints: latency-svc-5sdlj [981.445214ms] Aug 19 15:51:44.067: INFO: Created: latency-svc-mj4cj Aug 19 15:51:44.076: INFO: Got endpoints: latency-svc-mj4cj [1.051855276s] Aug 19 15:51:44.253: INFO: Created: latency-svc-qc97s Aug 19 15:51:44.266: INFO: Got endpoints: latency-svc-qc97s [1.181593034s] Aug 19 15:51:44.331: INFO: Created: latency-svc-q4skr Aug 19 15:51:44.345: INFO: Got endpoints: latency-svc-q4skr [1.195019328s] Aug 19 15:51:44.409: INFO: Created: latency-svc-qc57d Aug 19 15:51:44.424: INFO: Got endpoints: latency-svc-qc57d [1.180665685s] Aug 19 15:51:44.443: INFO: Created: latency-svc-w4dj7 Aug 19 15:51:44.638: INFO: Got endpoints: latency-svc-w4dj7 [1.306223138s] Aug 19 15:51:44.721: INFO: Created: latency-svc-c59qp Aug 19 15:51:44.861: INFO: Got endpoints: latency-svc-c59qp [1.476734141s] Aug 19 15:51:45.236: INFO: Created: latency-svc-xbdpc Aug 19 15:51:45.344: INFO: Got endpoints: latency-svc-xbdpc [1.929749947s] Aug 19 15:51:45.418: INFO: Created: latency-svc-vk454 Aug 19 15:51:45.561: INFO: Got endpoints: latency-svc-vk454 [2.114609777s] Aug 19 15:51:45.715: INFO: Created: latency-svc-d6wtf Aug 19 15:51:45.763: INFO: Got endpoints: latency-svc-d6wtf [2.206397733s] Aug 19 15:51:45.859: INFO: Created: latency-svc-zxmfj Aug 19 15:51:45.916: INFO: Got endpoints: latency-svc-zxmfj [2.311623413s] Aug 19 15:51:46.008: INFO: Created: latency-svc-jsmqn Aug 19 15:51:46.031: INFO: Got endpoints: latency-svc-jsmqn [2.371861509s] Aug 19 15:51:46.094: INFO: Created: latency-svc-ks4vx Aug 19 15:51:46.102: INFO: Got endpoints: latency-svc-ks4vx [2.389186497s] Aug 19 15:51:46.210: INFO: Created: latency-svc-rxw6t Aug 19 15:51:46.221: INFO: Got endpoints: latency-svc-rxw6t [2.368096964s] Aug 19 15:51:46.247: INFO: Created: latency-svc-lcpzl Aug 19 15:51:46.296: INFO: Got endpoints: latency-svc-lcpzl [2.414425368s] Aug 19 15:51:46.336: INFO: Created: latency-svc-qsjt9 Aug 19 15:51:46.353: INFO: 
Got endpoints: latency-svc-qsjt9 [2.405412153s] Aug 19 15:51:46.382: INFO: Created: latency-svc-k6pf5 Aug 19 15:51:46.426: INFO: Got endpoints: latency-svc-k6pf5 [2.350315507s] Aug 19 15:51:46.448: INFO: Created: latency-svc-wvbg7 Aug 19 15:51:46.714: INFO: Got endpoints: latency-svc-wvbg7 [2.448086062s] Aug 19 15:51:46.864: INFO: Created: latency-svc-pnx7x Aug 19 15:51:46.894: INFO: Got endpoints: latency-svc-pnx7x [2.549430015s] Aug 19 15:51:46.941: INFO: Created: latency-svc-cbdkp Aug 19 15:51:46.954: INFO: Got endpoints: latency-svc-cbdkp [2.528968175s] Aug 19 15:51:47.002: INFO: Created: latency-svc-zbnrn Aug 19 15:51:47.013: INFO: Got endpoints: latency-svc-zbnrn [2.375085088s] Aug 19 15:51:47.032: INFO: Created: latency-svc-pp9gb Aug 19 15:51:47.273: INFO: Got endpoints: latency-svc-pp9gb [2.41114838s] Aug 19 15:51:47.468: INFO: Created: latency-svc-qbw6z Aug 19 15:51:47.473: INFO: Got endpoints: latency-svc-qbw6z [2.128093796s] Aug 19 15:51:47.727: INFO: Created: latency-svc-dqlwm Aug 19 15:51:47.739: INFO: Got endpoints: latency-svc-dqlwm [2.177791976s] Aug 19 15:51:47.756: INFO: Created: latency-svc-pvg5w Aug 19 15:51:47.770: INFO: Got endpoints: latency-svc-pvg5w [2.007240127s] Aug 19 15:51:47.918: INFO: Created: latency-svc-fq4l2 Aug 19 15:51:47.937: INFO: Got endpoints: latency-svc-fq4l2 [2.020986146s] Aug 19 15:51:48.237: INFO: Created: latency-svc-qlx2p Aug 19 15:51:48.471: INFO: Got endpoints: latency-svc-qlx2p [2.439581065s] Aug 19 15:51:48.471: INFO: Created: latency-svc-xmw8z Aug 19 15:51:48.539: INFO: Got endpoints: latency-svc-xmw8z [2.436995261s] Aug 19 15:51:48.823: INFO: Created: latency-svc-fhxd6 Aug 19 15:51:49.082: INFO: Got endpoints: latency-svc-fhxd6 [2.860607046s] Aug 19 15:51:49.357: INFO: Created: latency-svc-kzjfp Aug 19 15:51:49.582: INFO: Got endpoints: latency-svc-kzjfp [3.285531542s] Aug 19 15:51:49.626: INFO: Created: latency-svc-mkd6b Aug 19 15:51:49.776: INFO: Got endpoints: latency-svc-mkd6b [3.42206016s] Aug 19 15:51:49.871: INFO: Created: latency-svc-bchg2 Aug 19 15:51:50.039: INFO: Got endpoints: latency-svc-bchg2 [3.61251611s] Aug 19 15:51:50.461: INFO: Created: latency-svc-hlcsm Aug 19 15:51:50.635: INFO: Got endpoints: latency-svc-hlcsm [3.920953411s] Aug 19 15:51:50.892: INFO: Created: latency-svc-fpbcx Aug 19 15:51:50.942: INFO: Got endpoints: latency-svc-fpbcx [4.047756509s] Aug 19 15:51:51.050: INFO: Created: latency-svc-n6s8m Aug 19 15:51:51.082: INFO: Got endpoints: latency-svc-n6s8m [4.127791795s] Aug 19 15:51:51.107: INFO: Created: latency-svc-7wgc4 Aug 19 15:51:51.124: INFO: Got endpoints: latency-svc-7wgc4 [4.109998431s] Aug 19 15:51:51.139: INFO: Created: latency-svc-l8c5n Aug 19 15:51:51.175: INFO: Got endpoints: latency-svc-l8c5n [3.901851604s] Aug 19 15:51:51.193: INFO: Created: latency-svc-9t2nm Aug 19 15:51:51.221: INFO: Got endpoints: latency-svc-9t2nm [3.748177533s] Aug 19 15:51:51.241: INFO: Created: latency-svc-8jd9p Aug 19 15:51:51.266: INFO: Got endpoints: latency-svc-8jd9p [3.526974439s] Aug 19 15:51:51.325: INFO: Created: latency-svc-jssjv Aug 19 15:51:51.356: INFO: Created: latency-svc-mjnqj Aug 19 15:51:51.357: INFO: Got endpoints: latency-svc-jssjv [3.586327897s] Aug 19 15:51:51.385: INFO: Got endpoints: latency-svc-mjnqj [3.447434865s] Aug 19 15:51:51.481: INFO: Created: latency-svc-7pvrd Aug 19 15:51:51.494: INFO: Got endpoints: latency-svc-7pvrd [3.02272255s] Aug 19 15:51:51.539: INFO: Created: latency-svc-4m5tj Aug 19 15:51:51.564: INFO: Got endpoints: latency-svc-4m5tj [3.024378145s] Aug 19 15:51:51.643: INFO: 
Created: latency-svc-zlznr Aug 19 15:51:51.653: INFO: Got endpoints: latency-svc-zlznr [2.570925981s] Aug 19 15:51:51.674: INFO: Created: latency-svc-2r7kr Aug 19 15:51:51.691: INFO: Got endpoints: latency-svc-2r7kr [2.109036873s] Aug 19 15:51:51.709: INFO: Created: latency-svc-2ftgq Aug 19 15:51:51.725: INFO: Got endpoints: latency-svc-2ftgq [1.949441074s] Aug 19 15:51:51.779: INFO: Created: latency-svc-gjsqj Aug 19 15:51:51.783: INFO: Got endpoints: latency-svc-gjsqj [1.744318982s] Aug 19 15:51:51.810: INFO: Created: latency-svc-pmx8c Aug 19 15:51:51.827: INFO: Got endpoints: latency-svc-pmx8c [1.191829953s] Aug 19 15:51:51.840: INFO: Created: latency-svc-vqk2q Aug 19 15:51:51.866: INFO: Got endpoints: latency-svc-vqk2q [923.267085ms] Aug 19 15:51:51.960: INFO: Created: latency-svc-vwhpp Aug 19 15:51:51.964: INFO: Got endpoints: latency-svc-vwhpp [881.694145ms] Aug 19 15:51:51.991: INFO: Created: latency-svc-6z8sm Aug 19 15:51:52.002: INFO: Got endpoints: latency-svc-6z8sm [878.42573ms] Aug 19 15:51:52.021: INFO: Created: latency-svc-x2md4 Aug 19 15:51:52.052: INFO: Got endpoints: latency-svc-x2md4 [876.921724ms] Aug 19 15:51:52.128: INFO: Created: latency-svc-gplzh Aug 19 15:51:52.132: INFO: Got endpoints: latency-svc-gplzh [911.309063ms] Aug 19 15:51:52.157: INFO: Created: latency-svc-x5j6f Aug 19 15:51:52.172: INFO: Got endpoints: latency-svc-x5j6f [905.833025ms] Aug 19 15:51:52.206: INFO: Created: latency-svc-xq4df Aug 19 15:51:52.296: INFO: Got endpoints: latency-svc-xq4df [939.254666ms] Aug 19 15:51:52.316: INFO: Created: latency-svc-kmzl7 Aug 19 15:51:52.326: INFO: Got endpoints: latency-svc-kmzl7 [941.213304ms] Aug 19 15:51:52.343: INFO: Created: latency-svc-mdz4m Aug 19 15:51:52.358: INFO: Got endpoints: latency-svc-mdz4m [863.770081ms] Aug 19 15:51:52.379: INFO: Created: latency-svc-7kwlk Aug 19 15:51:52.433: INFO: Got endpoints: latency-svc-7kwlk [868.781671ms] Aug 19 15:51:52.439: INFO: Created: latency-svc-kws75 Aug 19 15:51:52.454: INFO: Got endpoints: latency-svc-kws75 [800.870233ms] Aug 19 15:51:52.471: INFO: Created: latency-svc-jxbr9 Aug 19 15:51:52.486: INFO: Got endpoints: latency-svc-jxbr9 [794.500553ms] Aug 19 15:51:52.502: INFO: Created: latency-svc-9jnqf Aug 19 15:51:52.530: INFO: Got endpoints: latency-svc-9jnqf [804.520277ms] Aug 19 15:51:52.595: INFO: Created: latency-svc-6hg9t Aug 19 15:51:52.599: INFO: Got endpoints: latency-svc-6hg9t [815.546969ms] Aug 19 15:51:52.659: INFO: Created: latency-svc-f8bbr Aug 19 15:51:52.671: INFO: Got endpoints: latency-svc-f8bbr [843.400234ms] Aug 19 15:51:52.745: INFO: Created: latency-svc-vmktc Aug 19 15:51:52.769: INFO: Created: latency-svc-xnwf2 Aug 19 15:51:52.770: INFO: Got endpoints: latency-svc-vmktc [903.703828ms] Aug 19 15:51:52.785: INFO: Got endpoints: latency-svc-xnwf2 [821.741951ms] Aug 19 15:51:52.806: INFO: Created: latency-svc-nbv8f Aug 19 15:51:52.815: INFO: Got endpoints: latency-svc-nbv8f [812.69466ms] Aug 19 15:51:52.834: INFO: Created: latency-svc-p2fpk Aug 19 15:51:52.880: INFO: Got endpoints: latency-svc-p2fpk [828.505678ms] Aug 19 15:51:52.908: INFO: Created: latency-svc-l2llt Aug 19 15:51:52.924: INFO: Got endpoints: latency-svc-l2llt [791.537559ms] Aug 19 15:51:52.952: INFO: Created: latency-svc-nvvmx Aug 19 15:51:52.960: INFO: Got endpoints: latency-svc-nvvmx [787.481138ms] Aug 19 15:51:52.979: INFO: Created: latency-svc-lvlgm Aug 19 15:51:53.069: INFO: Got endpoints: latency-svc-lvlgm [771.855413ms] Aug 19 15:51:53.071: INFO: Created: latency-svc-mjsst Aug 19 15:51:53.087: INFO: Got endpoints: 
latency-svc-mjsst [761.215219ms] Aug 19 15:51:53.126: INFO: Created: latency-svc-82g6x Aug 19 15:51:53.149: INFO: Got endpoints: latency-svc-82g6x [790.973654ms] Aug 19 15:51:53.236: INFO: Created: latency-svc-lbqrq Aug 19 15:51:53.243: INFO: Got endpoints: latency-svc-lbqrq [810.37407ms] Aug 19 15:51:53.267: INFO: Created: latency-svc-t9j5b Aug 19 15:51:53.281: INFO: Got endpoints: latency-svc-t9j5b [826.565263ms] Aug 19 15:51:53.297: INFO: Created: latency-svc-gcksn Aug 19 15:51:53.305: INFO: Got endpoints: latency-svc-gcksn [818.673796ms] Aug 19 15:51:53.322: INFO: Created: latency-svc-ljktq Aug 19 15:51:53.380: INFO: Got endpoints: latency-svc-ljktq [849.325771ms] Aug 19 15:51:53.388: INFO: Created: latency-svc-b9mgd Aug 19 15:51:53.395: INFO: Got endpoints: latency-svc-b9mgd [795.718942ms] Aug 19 15:51:53.420: INFO: Created: latency-svc-85ngb Aug 19 15:51:53.432: INFO: Got endpoints: latency-svc-85ngb [760.7686ms] Aug 19 15:51:53.477: INFO: Created: latency-svc-6z8t2 Aug 19 15:51:53.553: INFO: Got endpoints: latency-svc-6z8t2 [782.702568ms] Aug 19 15:51:53.555: INFO: Created: latency-svc-qj2x7 Aug 19 15:51:53.595: INFO: Got endpoints: latency-svc-qj2x7 [808.936064ms] Aug 19 15:51:53.617: INFO: Created: latency-svc-5hw7v Aug 19 15:51:53.636: INFO: Got endpoints: latency-svc-5hw7v [820.924931ms] Aug 19 15:51:53.652: INFO: Created: latency-svc-f95ld Aug 19 15:51:53.723: INFO: Created: latency-svc-78sdv Aug 19 15:51:53.725: INFO: Got endpoints: latency-svc-f95ld [844.640375ms] Aug 19 15:51:53.791: INFO: Got endpoints: latency-svc-78sdv [866.588985ms] Aug 19 15:51:53.895: INFO: Created: latency-svc-5ft8j Aug 19 15:51:53.924: INFO: Got endpoints: latency-svc-5ft8j [964.038239ms] Aug 19 15:51:53.924: INFO: Created: latency-svc-t9vbb Aug 19 15:51:53.951: INFO: Got endpoints: latency-svc-t9vbb [881.908653ms] Aug 19 15:51:53.981: INFO: Created: latency-svc-8lx88 Aug 19 15:51:54.045: INFO: Got endpoints: latency-svc-8lx88 [956.855279ms] Aug 19 15:51:54.079: INFO: Created: latency-svc-c7gpm Aug 19 15:51:54.100: INFO: Got endpoints: latency-svc-c7gpm [951.06359ms] Aug 19 15:51:54.126: INFO: Created: latency-svc-mzbm4 Aug 19 15:51:54.142: INFO: Got endpoints: latency-svc-mzbm4 [898.739433ms] Aug 19 15:51:54.226: INFO: Created: latency-svc-s9bf5 Aug 19 15:51:54.273: INFO: Got endpoints: latency-svc-s9bf5 [991.806151ms] Aug 19 15:51:54.306: INFO: Created: latency-svc-xg8t4 Aug 19 15:51:54.386: INFO: Got endpoints: latency-svc-xg8t4 [1.080891937s] Aug 19 15:51:54.413: INFO: Created: latency-svc-4fcrh Aug 19 15:51:54.427: INFO: Got endpoints: latency-svc-4fcrh [1.046539209s] Aug 19 15:51:54.443: INFO: Created: latency-svc-nkpf6 Aug 19 15:51:54.455: INFO: Got endpoints: latency-svc-nkpf6 [1.060168362s] Aug 19 15:51:54.569: INFO: Created: latency-svc-8z62q Aug 19 15:51:54.575: INFO: Got endpoints: latency-svc-8z62q [1.142335726s] Aug 19 15:51:54.637: INFO: Created: latency-svc-lwm6l Aug 19 15:51:54.648: INFO: Got endpoints: latency-svc-lwm6l [1.094765897s] Aug 19 15:51:54.664: INFO: Created: latency-svc-gdsq5 Aug 19 15:51:54.738: INFO: Got endpoints: latency-svc-gdsq5 [1.142735161s] Aug 19 15:51:54.739: INFO: Created: latency-svc-mnxzt Aug 19 15:51:54.744: INFO: Got endpoints: latency-svc-mnxzt [1.107141771s] Aug 19 15:51:54.762: INFO: Created: latency-svc-9z7xx Aug 19 15:51:54.781: INFO: Got endpoints: latency-svc-9z7xx [1.055283191s] Aug 19 15:51:54.798: INFO: Created: latency-svc-lgn5k Aug 19 15:51:54.810: INFO: Got endpoints: latency-svc-lgn5k [1.018839159s] Aug 19 15:51:54.828: INFO: Created: 
latency-svc-8z5ws Aug 19 15:51:54.893: INFO: Got endpoints: latency-svc-8z5ws [967.977996ms] Aug 19 15:51:54.924: INFO: Created: latency-svc-7x7sv Aug 19 15:51:54.949: INFO: Got endpoints: latency-svc-7x7sv [998.412621ms] Aug 19 15:51:54.983: INFO: Created: latency-svc-l4glq Aug 19 15:51:55.056: INFO: Got endpoints: latency-svc-l4glq [1.011299386s] Aug 19 15:51:55.057: INFO: Created: latency-svc-47j5c Aug 19 15:51:55.074: INFO: Got endpoints: latency-svc-47j5c [973.875385ms] Aug 19 15:51:55.114: INFO: Created: latency-svc-b9kct Aug 19 15:51:55.131: INFO: Got endpoints: latency-svc-b9kct [988.224764ms] Aug 19 15:51:55.199: INFO: Created: latency-svc-xr42d Aug 19 15:51:55.208: INFO: Got endpoints: latency-svc-xr42d [934.698096ms] Aug 19 15:51:55.230: INFO: Created: latency-svc-hjhsd Aug 19 15:51:55.252: INFO: Got endpoints: latency-svc-hjhsd [866.060545ms] Aug 19 15:51:55.266: INFO: Created: latency-svc-vfclj Aug 19 15:51:55.284: INFO: Got endpoints: latency-svc-vfclj [857.344621ms] Aug 19 15:51:55.361: INFO: Created: latency-svc-k62hn Aug 19 15:51:55.401: INFO: Got endpoints: latency-svc-k62hn [945.621841ms] Aug 19 15:51:55.419: INFO: Created: latency-svc-bh78k Aug 19 15:51:55.432: INFO: Got endpoints: latency-svc-bh78k [856.764167ms] Aug 19 15:51:55.453: INFO: Created: latency-svc-hnrl6 Aug 19 15:51:55.524: INFO: Created: latency-svc-qfxkk Aug 19 15:51:55.525: INFO: Got endpoints: latency-svc-hnrl6 [876.79839ms] Aug 19 15:51:55.534: INFO: Got endpoints: latency-svc-qfxkk [795.864799ms] Aug 19 15:51:55.552: INFO: Created: latency-svc-c694x Aug 19 15:51:55.565: INFO: Got endpoints: latency-svc-c694x [821.517334ms] Aug 19 15:51:55.594: INFO: Created: latency-svc-8g4tx Aug 19 15:51:55.675: INFO: Got endpoints: latency-svc-8g4tx [893.529531ms] Aug 19 15:51:55.680: INFO: Created: latency-svc-wc6gk Aug 19 15:51:55.698: INFO: Got endpoints: latency-svc-wc6gk [887.400186ms] Aug 19 15:51:55.748: INFO: Created: latency-svc-h9gjz Aug 19 15:51:55.758: INFO: Got endpoints: latency-svc-h9gjz [865.43503ms] Aug 19 15:51:55.815: INFO: Created: latency-svc-c6gmh Aug 19 15:51:55.819: INFO: Got endpoints: latency-svc-c6gmh [869.158632ms] Aug 19 15:51:55.839: INFO: Created: latency-svc-784fz Aug 19 15:51:55.853: INFO: Got endpoints: latency-svc-784fz [796.664087ms] Aug 19 15:51:55.882: INFO: Created: latency-svc-2pv57 Aug 19 15:51:55.915: INFO: Got endpoints: latency-svc-2pv57 [840.327838ms] Aug 19 15:51:55.978: INFO: Created: latency-svc-67zmk Aug 19 15:51:55.988: INFO: Got endpoints: latency-svc-67zmk [856.998297ms] Aug 19 15:51:56.021: INFO: Created: latency-svc-49qzf Aug 19 15:51:56.035: INFO: Got endpoints: latency-svc-49qzf [826.805884ms] Aug 19 15:51:56.063: INFO: Created: latency-svc-lqh2t Aug 19 15:51:56.166: INFO: Got endpoints: latency-svc-lqh2t [913.633651ms] Aug 19 15:51:56.167: INFO: Created: latency-svc-946cd Aug 19 15:51:56.185: INFO: Got endpoints: latency-svc-946cd [901.030884ms] Aug 19 15:51:56.207: INFO: Created: latency-svc-8h9sn Aug 19 15:51:56.223: INFO: Got endpoints: latency-svc-8h9sn [821.626123ms] Aug 19 15:51:56.244: INFO: Created: latency-svc-lzmms Aug 19 15:51:56.251: INFO: Got endpoints: latency-svc-lzmms [818.763204ms] Aug 19 15:51:56.313: INFO: Created: latency-svc-pp7d7 Aug 19 15:51:56.330: INFO: Got endpoints: latency-svc-pp7d7 [805.480214ms] Aug 19 15:51:56.347: INFO: Created: latency-svc-ttqf9 Aug 19 15:51:56.371: INFO: Got endpoints: latency-svc-ttqf9 [837.085055ms] Aug 19 15:51:56.395: INFO: Created: latency-svc-7w9lz Aug 19 15:51:56.410: INFO: Got endpoints: 
latency-svc-7w9lz [844.085947ms] Aug 19 15:51:56.469: INFO: Created: latency-svc-n77kd Aug 19 15:51:56.475: INFO: Got endpoints: latency-svc-n77kd [800.569362ms] Aug 19 15:51:56.494: INFO: Created: latency-svc-vvrrn Aug 19 15:51:56.515: INFO: Got endpoints: latency-svc-vvrrn [817.188267ms] Aug 19 15:51:56.545: INFO: Created: latency-svc-kvwz5 Aug 19 15:51:56.566: INFO: Got endpoints: latency-svc-kvwz5 [807.7971ms] Aug 19 15:51:56.619: INFO: Created: latency-svc-llzkt Aug 19 15:51:56.626: INFO: Got endpoints: latency-svc-llzkt [806.739373ms] Aug 19 15:51:56.663: INFO: Created: latency-svc-4n6xz Aug 19 15:51:56.687: INFO: Got endpoints: latency-svc-4n6xz [833.731222ms] Aug 19 15:51:56.713: INFO: Created: latency-svc-4l2x2 Aug 19 15:51:56.763: INFO: Got endpoints: latency-svc-4l2x2 [847.81333ms] Aug 19 15:51:56.785: INFO: Created: latency-svc-dbzpd Aug 19 15:51:56.802: INFO: Got endpoints: latency-svc-dbzpd [813.381317ms] Aug 19 15:51:56.836: INFO: Created: latency-svc-x2q6t Aug 19 15:51:56.860: INFO: Got endpoints: latency-svc-x2q6t [825.051611ms] Aug 19 15:51:56.906: INFO: Created: latency-svc-54hqp Aug 19 15:51:56.929: INFO: Created: latency-svc-jclzx Aug 19 15:51:56.931: INFO: Got endpoints: latency-svc-54hqp [764.458682ms] Aug 19 15:51:56.959: INFO: Got endpoints: latency-svc-jclzx [773.704872ms] Aug 19 15:51:56.995: INFO: Created: latency-svc-s676w Aug 19 15:51:57.068: INFO: Got endpoints: latency-svc-s676w [844.271795ms] Aug 19 15:51:57.083: INFO: Created: latency-svc-5vsct Aug 19 15:51:57.096: INFO: Got endpoints: latency-svc-5vsct [845.13569ms] Aug 19 15:51:57.124: INFO: Created: latency-svc-twxtp Aug 19 15:51:57.139: INFO: Got endpoints: latency-svc-twxtp [808.56727ms] Aug 19 15:51:57.157: INFO: Created: latency-svc-l72m7 Aug 19 15:51:57.254: INFO: Got endpoints: latency-svc-l72m7 [882.328997ms] Aug 19 15:51:57.258: INFO: Created: latency-svc-6pwx5 Aug 19 15:51:57.289: INFO: Got endpoints: latency-svc-6pwx5 [879.425391ms] Aug 19 15:51:57.347: INFO: Created: latency-svc-2896w Aug 19 15:51:57.349: INFO: Got endpoints: latency-svc-2896w [873.246247ms] Aug 19 15:51:57.433: INFO: Created: latency-svc-2vhn8 Aug 19 15:51:57.446: INFO: Got endpoints: latency-svc-2vhn8 [930.95948ms] Aug 19 15:51:57.462: INFO: Created: latency-svc-xgjd4 Aug 19 15:51:57.476: INFO: Got endpoints: latency-svc-xgjd4 [909.375525ms] Aug 19 15:51:57.547: INFO: Created: latency-svc-tx5xx Aug 19 15:51:57.562: INFO: Got endpoints: latency-svc-tx5xx [936.615942ms] Aug 19 15:51:57.594: INFO: Created: latency-svc-9ktl2 Aug 19 15:51:57.626: INFO: Got endpoints: latency-svc-9ktl2 [939.121973ms] Aug 19 15:51:57.754: INFO: Created: latency-svc-sxqpx Aug 19 15:51:57.765: INFO: Got endpoints: latency-svc-sxqpx [1.001936019s] Aug 19 15:51:57.790: INFO: Created: latency-svc-9rw85 Aug 19 15:51:57.806: INFO: Got endpoints: latency-svc-9rw85 [1.004611586s] Aug 19 15:51:57.905: INFO: Created: latency-svc-t7f86 Aug 19 15:51:57.909: INFO: Got endpoints: latency-svc-t7f86 [1.049368085s] Aug 19 15:51:57.955: INFO: Created: latency-svc-kmrdj Aug 19 15:51:57.962: INFO: Got endpoints: latency-svc-kmrdj [1.031519807s] Aug 19 15:51:57.994: INFO: Created: latency-svc-sqj6v Aug 19 15:51:58.105: INFO: Got endpoints: latency-svc-sqj6v [1.145784433s] Aug 19 15:51:58.109: INFO: Created: latency-svc-tvpn4 Aug 19 15:51:58.119: INFO: Got endpoints: latency-svc-tvpn4 [1.051373533s] Aug 19 15:51:58.140: INFO: Created: latency-svc-24sh2 Aug 19 15:51:58.161: INFO: Got endpoints: latency-svc-24sh2 [1.064492504s] Aug 19 15:51:58.194: INFO: Created: 
latency-svc-8fbbz Aug 19 15:51:58.248: INFO: Got endpoints: latency-svc-8fbbz [1.108334376s] Aug 19 15:51:58.294: INFO: Created: latency-svc-rfj7r Aug 19 15:51:58.307: INFO: Got endpoints: latency-svc-rfj7r [1.053366869s] Aug 19 15:51:58.344: INFO: Created: latency-svc-fvz5l Aug 19 15:51:58.439: INFO: Got endpoints: latency-svc-fvz5l [1.149426661s] Aug 19 15:51:58.444: INFO: Created: latency-svc-wxfgn Aug 19 15:51:58.458: INFO: Got endpoints: latency-svc-wxfgn [1.108497316s] Aug 19 15:51:58.477: INFO: Created: latency-svc-xx89x Aug 19 15:51:58.507: INFO: Got endpoints: latency-svc-xx89x [1.060533677s] Aug 19 15:51:58.523: INFO: Created: latency-svc-2bnmb Aug 19 15:51:58.611: INFO: Got endpoints: latency-svc-2bnmb [1.134548415s] Aug 19 15:51:58.614: INFO: Created: latency-svc-ddxvg Aug 19 15:51:58.619: INFO: Got endpoints: latency-svc-ddxvg [1.056266088s] Aug 19 15:51:58.668: INFO: Created: latency-svc-l7pfs Aug 19 15:51:58.698: INFO: Got endpoints: latency-svc-l7pfs [1.071790353s] Aug 19 15:51:58.812: INFO: Created: latency-svc-lx6bm Aug 19 15:51:58.866: INFO: Got endpoints: latency-svc-lx6bm [1.100668743s] Aug 19 15:51:58.931: INFO: Created: latency-svc-q68hj Aug 19 15:51:58.934: INFO: Got endpoints: latency-svc-q68hj [1.127008968s] Aug 19 15:51:58.968: INFO: Created: latency-svc-c8wjn Aug 19 15:51:59.005: INFO: Got endpoints: latency-svc-c8wjn [1.095183446s] Aug 19 15:51:59.086: INFO: Created: latency-svc-fktnn Aug 19 15:51:59.142: INFO: Created: latency-svc-mzjkx Aug 19 15:51:59.143: INFO: Got endpoints: latency-svc-fktnn [1.180141227s] Aug 19 15:51:59.154: INFO: Got endpoints: latency-svc-mzjkx [1.048453319s] Aug 19 15:51:59.341: INFO: Created: latency-svc-pjv4s Aug 19 15:51:59.370: INFO: Got endpoints: latency-svc-pjv4s [1.250785038s] Aug 19 15:51:59.494: INFO: Created: latency-svc-kjm4z Aug 19 15:51:59.494: INFO: Got endpoints: latency-svc-kjm4z [1.333674754s] Aug 19 15:51:59.678: INFO: Created: latency-svc-fqstn Aug 19 15:51:59.707: INFO: Got endpoints: latency-svc-fqstn [1.45900318s] Aug 19 15:51:59.737: INFO: Created: latency-svc-d7ktk Aug 19 15:51:59.750: INFO: Got endpoints: latency-svc-d7ktk [1.442088302s] Aug 19 15:51:59.834: INFO: Created: latency-svc-fdm5d Aug 19 15:51:59.885: INFO: Got endpoints: latency-svc-fdm5d [1.445882777s] Aug 19 15:52:00.177: INFO: Created: latency-svc-gn7fd Aug 19 15:52:00.181: INFO: Got endpoints: latency-svc-gn7fd [1.723447664s] Aug 19 15:52:00.337: INFO: Created: latency-svc-4wg6k Aug 19 15:52:00.343: INFO: Got endpoints: latency-svc-4wg6k [1.835326331s] Aug 19 15:52:00.381: INFO: Created: latency-svc-6bjzf Aug 19 15:52:00.403: INFO: Got endpoints: latency-svc-6bjzf [1.791773421s] Aug 19 15:52:00.432: INFO: Created: latency-svc-jthbf Aug 19 15:52:00.494: INFO: Got endpoints: latency-svc-jthbf [1.874666574s] Aug 19 15:52:00.506: INFO: Created: latency-svc-27kkj Aug 19 15:52:00.518: INFO: Got endpoints: latency-svc-27kkj [1.820054306s] Aug 19 15:52:00.548: INFO: Created: latency-svc-jm6tx Aug 19 15:52:00.581: INFO: Got endpoints: latency-svc-jm6tx [1.714839039s] Aug 19 15:52:00.659: INFO: Created: latency-svc-sz8xv Aug 19 15:52:00.667: INFO: Got endpoints: latency-svc-sz8xv [1.733410776s] Aug 19 15:52:00.717: INFO: Created: latency-svc-tmlqm Aug 19 15:52:00.729: INFO: Got endpoints: latency-svc-tmlqm [1.723630965s] Aug 19 15:52:00.797: INFO: Created: latency-svc-c2vdh Aug 19 15:52:00.816: INFO: Got endpoints: latency-svc-c2vdh [1.672899345s] Aug 19 15:52:00.841: INFO: Created: latency-svc-nd76t Aug 19 15:52:00.855: INFO: Got endpoints: 
latency-svc-nd76t [1.701282714s] Aug 19 15:52:00.888: INFO: Created: latency-svc-x8zl7 Aug 19 15:52:00.947: INFO: Got endpoints: latency-svc-x8zl7 [1.576614738s] Aug 19 15:52:00.948: INFO: Latencies: [75.418025ms 83.015999ms 122.810545ms 239.452377ms 296.985769ms 356.463443ms 421.87127ms 516.831951ms 603.93535ms 657.841905ms 687.539203ms 717.713585ms 760.7686ms 761.215219ms 764.458682ms 771.855413ms 773.704872ms 782.702568ms 787.481138ms 790.973654ms 791.537559ms 794.092843ms 794.500553ms 795.718942ms 795.864799ms 796.664087ms 800.569362ms 800.870233ms 804.520277ms 805.480214ms 806.739373ms 807.7971ms 808.56727ms 808.936064ms 810.37407ms 812.69466ms 813.381317ms 815.546969ms 817.188267ms 818.673796ms 818.763204ms 820.924931ms 821.517334ms 821.626123ms 821.741951ms 825.051611ms 826.565263ms 826.805884ms 828.505678ms 829.6189ms 833.731222ms 837.085055ms 840.327838ms 843.400234ms 844.085947ms 844.271795ms 844.640375ms 845.13569ms 847.81333ms 849.325771ms 856.764167ms 856.998297ms 857.344621ms 863.770081ms 865.43503ms 866.060545ms 866.588985ms 868.781671ms 869.158632ms 873.246247ms 876.79839ms 876.921724ms 878.42573ms 879.425391ms 881.694145ms 881.908653ms 882.328997ms 887.400186ms 893.529531ms 898.739433ms 901.030884ms 903.703828ms 905.833025ms 909.375525ms 911.309063ms 913.633651ms 923.267085ms 930.95948ms 931.178766ms 934.698096ms 936.615942ms 939.121973ms 939.254666ms 941.213304ms 945.621841ms 951.06359ms 956.855279ms 964.038239ms 967.977996ms 973.875385ms 981.445214ms 984.982286ms 988.224764ms 991.806151ms 998.412621ms 1.001936019s 1.004611586s 1.011299386s 1.018839159s 1.031519807s 1.031750477s 1.046539209s 1.048453319s 1.049368085s 1.049867675s 1.051373533s 1.051855276s 1.053366869s 1.055283191s 1.056266088s 1.060168362s 1.060533677s 1.064492504s 1.071790353s 1.080891937s 1.094765897s 1.095183446s 1.100668743s 1.107141771s 1.108334376s 1.108497316s 1.127008968s 1.134548415s 1.142335726s 1.142735161s 1.145784433s 1.149426661s 1.180141227s 1.180665685s 1.181593034s 1.191829953s 1.195019328s 1.250785038s 1.306223138s 1.333674754s 1.442088302s 1.445882777s 1.45900318s 1.476734141s 1.576614738s 1.672899345s 1.701282714s 1.714839039s 1.723447664s 1.723630965s 1.733410776s 1.744318982s 1.791773421s 1.820054306s 1.835326331s 1.874666574s 1.929749947s 1.949441074s 2.007240127s 2.020986146s 2.109036873s 2.114609777s 2.128093796s 2.177791976s 2.206397733s 2.311623413s 2.350315507s 2.368096964s 2.371861509s 2.375085088s 2.389186497s 2.405412153s 2.41114838s 2.414425368s 2.436995261s 2.439581065s 2.448086062s 2.528968175s 2.549430015s 2.570925981s 2.860607046s 3.02272255s 3.024378145s 3.285531542s 3.42206016s 3.447434865s 3.526974439s 3.586327897s 3.61251611s 3.748177533s 3.901851604s 3.920953411s 4.047756509s 4.109998431s 4.127791795s] Aug 19 15:52:00.949: INFO: 50 %ile: 981.445214ms Aug 19 15:52:00.949: INFO: 90 %ile: 2.439581065s Aug 19 15:52:00.950: INFO: 99 %ile: 4.109998431s Aug 19 15:52:00.950: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:00.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3045" for this suite. 
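Each Created/Got endpoints pair above times one sample: create a Service selecting the svc-latency-rc pod, then watch for its Endpoints object to be populated. The per-sample object is essentially the following; the selector label and port are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-xxxxx            # a fresh generated name for each of the 200 samples
spec:
  selector:
    name: svc-latency-rc             # assumed; must match the RC's pod-template labels
  ports:
  - port: 80                         # illustrative port
    targetPort: 80

The verdict is taken on the aggregate percentiles printed at the end (50/90/99 %ile), so individual multi-second outliers such as the 4.1s samples do not by themselves fail the spec.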
• [SLOW TEST:22.625 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":268,"skipped":4246,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:00.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Aug 19 15:52:01.093: INFO: Waiting up to 5m0s for pod "client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f" in namespace "containers-9247" to be "Succeeded or Failed" Aug 19 15:52:01.102: INFO: Pod "client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.469209ms Aug 19 15:52:03.109: INFO: Pod "client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015729636s Aug 19 15:52:05.117: INFO: Pod "client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f": Phase="Running", Reason="", readiness=true. Elapsed: 4.023656849s Aug 19 15:52:07.152: INFO: Pod "client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058542415s STEP: Saw pod success Aug 19 15:52:07.153: INFO: Pod "client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f" satisfied condition "Succeeded or Failed" Aug 19 15:52:07.172: INFO: Trying to get logs from node latest-worker2 pod client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f container test-container: STEP: delete the pod Aug 19 15:52:07.587: INFO: Waiting for pod client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f to disappear Aug 19 15:52:07.612: INFO: Pod client-containers-4334b1f5-a873-4f8c-a6cb-874bec3ef02f no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:07.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9247" for this suite. 
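The Docker Containers spec above ("override all") checks the mapping between pod fields and image metadata: spec.containers[].command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch with an assumed image:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumed stand-in image
    command: ["/bin/echo"]           # overrides the image's ENTRYPOINT
    args: ["override", "arguments"]  # overrides the image's CMD

Such a pod runs /bin/echo override arguments and exits 0 (hence the Succeeded phase above); the test then reads the echoed arguments back from the container log.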
• [SLOW TEST:6.671 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:07.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 19 15:52:15.882: INFO: &Pod{ObjectMeta:{send-events-0becfdd1-d869-4ede-8f2a-e485164682c5 events-1606 /api/v1/namespaces/events-1606/pods/send-events-0becfdd1-d869-4ede-8f2a-e485164682c5 bf3a0827-3fe9-404e-a5d1-3b63333fe77c 1534112 0 2020-08-19 15:52:07 +0000 UTC map[name:foo time:731191554] map[] [] [] [{e2e.test Update v1 2020-08-19 15:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-19 15:52:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.129\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ds66p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ds66p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ds66p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:52:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:52:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-19 15:52:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.129,StartTime:2020-08-19 15:52:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-19 15:52:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://8127c426df6bf8b3e823ce8e944f96c1e5e578e402801765760124b89508bc46,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 19 15:52:18.038: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 19 15:52:20.048: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:20.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1606" for this suite. • [SLOW TEST:12.506 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":270,"skipped":4263,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:20.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-d2de33b6-eca1-46bc-829b-4fc001e00bc7 STEP: Creating a pod to test consume configMaps 
Aug 19 15:52:20.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec" in namespace "configmap-6967" to be "Succeeded or Failed" Aug 19 15:52:20.362: INFO: Pod "pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec": Phase="Pending", Reason="", readiness=false. Elapsed: 38.784819ms Aug 19 15:52:22.405: INFO: Pod "pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082628105s Aug 19 15:52:24.414: INFO: Pod "pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091241891s Aug 19 15:52:26.500: INFO: Pod "pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177259973s STEP: Saw pod success Aug 19 15:52:26.501: INFO: Pod "pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec" satisfied condition "Succeeded or Failed" Aug 19 15:52:26.509: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec container configmap-volume-test: STEP: delete the pod Aug 19 15:52:26.728: INFO: Waiting for pod pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec to disappear Aug 19 15:52:26.758: INFO: Pod pod-configmaps-6ad49417-8303-449c-b68f-6cf8ac3129ec no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:26.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6967" for this suite. • [SLOW TEST:6.685 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4264,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:26.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 15:52:26.959: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:28.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5451" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":272,"skipped":4292,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:28.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Aug 19 15:52:29.221: INFO: created pod pod-service-account-defaultsa Aug 19 15:52:29.221: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 19 15:52:29.288: INFO: created pod pod-service-account-mountsa Aug 19 15:52:29.288: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 19 15:52:29.305: INFO: created pod pod-service-account-nomountsa Aug 19 15:52:29.305: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 19 15:52:29.338: INFO: created pod pod-service-account-defaultsa-mountspec Aug 19 15:52:29.338: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 19 15:52:29.366: INFO: created pod pod-service-account-mountsa-mountspec Aug 19 15:52:29.366: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 19 15:52:29.462: INFO: created pod pod-service-account-nomountsa-mountspec Aug 19 15:52:29.462: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 19 15:52:29.492: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 19 15:52:29.492: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 19 15:52:29.524: INFO: created pod pod-service-account-mountsa-nomountspec Aug 19 15:52:29.525: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 19 15:52:29.556: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 19 15:52:29.557: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:29.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5711" for this suite. 
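The defaultsa/mountsa/nomountsa × mountspec/nomountspec matrix above exercises one precedence rule: a pod's spec.automountServiceAccountToken, when set, overrides the ServiceAccount's own automountServiceAccountToken, and the SA-level setting applies only when the pod leaves the field nil — hence, e.g., pod-service-account-mountsa-nomountspec reporting "token volume mount: false". A minimal sketch of the two knobs (names illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optOutObjects pairs an opt-in ServiceAccount with a pod that opts out;
// the non-nil pod-level field wins, so no token volume is mounted.
func optOutObjects() (*corev1.ServiceAccount, *corev1.Pod) {
	mount, noMount := true, false
	sa := &corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "mount-sa"},
		AutomountServiceAccountToken: &mount, // SA default: mount a token
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-example"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           sa.Name,
			AutomountServiceAccountToken: &noMount, // pod opts out anyway
			Containers:                   []corev1.Container{{Name: "token-test", Image: "busybox"}},
		},
	}
	return sa, pod
}

func main() {
	sa, pod := optOutObjects()
	fmt.Println(*sa.AutomountServiceAccountToken, *pod.Spec.AutomountServiceAccountToken) // true false
}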
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":273,"skipped":4303,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:29.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 19 15:52:29.930: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 19 15:52:30.119: INFO: Waiting for terminating namespaces to be deleted... Aug 19 15:52:30.196: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 19 15:52:30.275: INFO: send-events-0becfdd1-d869-4ede-8f2a-e485164682c5 from events-1606 started at 2020-08-19 15:52:07 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.275: INFO: Container p ready: true, restart count 0 Aug 19 15:52:30.275: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.275: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 15:52:30.275: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.275: INFO: Container kube-proxy ready: true, restart count 0 Aug 19 15:52:30.275: INFO: pod-service-account-defaultsa from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.275: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.276: INFO: pod-service-account-mountsa-mountspec from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.276: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.276: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.276: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.276: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.276: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.276: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 19 15:52:30.320: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.320: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 15:52:30.320: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 19 15:52:30.320: INFO: Container kube-proxy ready: true, restart count 0 Aug 19 15:52:30.320: INFO: 
pod-service-account-defaultsa-mountspec from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container status recorded) Aug 19 15:52:30.320: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.320: INFO: pod-service-account-defaultsa-nomountspec from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container status recorded) Aug 19 15:52:30.320: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.320: INFO: pod-service-account-mountsa from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container status recorded) Aug 19 15:52:30.320: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.320: INFO: pod-service-account-nomountsa from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container status recorded) Aug 19 15:52:30.321: INFO: Container token-test ready: false, restart count 0 Aug 19 15:52:30.321: INFO: pod-service-account-nomountsa-nomountspec from svcaccounts-5711 started at 2020-08-19 15:52:29 +0000 UTC (1 container status recorded) Aug 19 15:52:30.321: INFO: Container token-test ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6849cbc8-f880-4c7a-8411-ec732af70e11 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-6849cbc8-f880-4c7a-8411-ec732af70e11 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-6849cbc8-f880-4c7a-8411-ec732af70e11 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:50.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8361" for this suite. 
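The steps above follow the usual pattern for this predicate: run a trivial unlabeled pod to discover a schedulable node, apply a unique kubernetes.io/e2e-* label to that node, then relaunch a pod whose nodeSelector demands that label. A hedged sketch of the relabel-and-match half (label key, value, pod name, and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeSelectorPod pins a pod to nodes carrying the given label; the
// NodeSelector predicate only admits nodes where every key/value pair
// matches, which is why the test labels one node before relaunching.
func nodeSelectorPod(key, value string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{key: value},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.2", // illustrative
			}},
		},
	}
}

func main() {
	fmt.Println(nodeSelectorPod("kubernetes.io/e2e-example", "42").Spec.NodeSelector)
}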
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.797 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":274,"skipped":4318,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:50.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:52:50.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3411" for this suite. 
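The fetch-and-find walk above (/apis, then /apis/apiextensions.k8s.io, then /apis/apiextensions.k8s.io/v1) is ordinary API discovery. A minimal sketch using client-go's discovery client, assuming the kubeconfig path this run itself uses; hasCRDGroupVersion is a hypothetical helper, not framework code:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hasCRDGroupVersion fetches the /apis discovery document and reports
// whether the apiextensions.k8s.io group serves v1, mirroring the first
// two steps the test walks through.
func hasCRDGroupVersion(cs *kubernetes.Clientset) (bool, error) {
	groups, err := cs.Discovery().ServerGroups() // GET /apis
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		if g.Name != "apiextensions.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			if v.GroupVersion == "apiextensions.k8s.io/v1" {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	ok, err := hasCRDGroupVersion(kubernetes.NewForConfigOrDie(cfg))
	fmt.Println(ok, err)
}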
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":275,"skipped":4339,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:52:51.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-tp7q STEP: Creating a pod to test atomic-volume-subpath Aug 19 15:52:51.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tp7q" in namespace "subpath-6567" to be "Succeeded or Failed" Aug 19 15:52:51.836: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Pending", Reason="", readiness=false. Elapsed: 187.730366ms Aug 19 15:52:54.129: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.480758956s Aug 19 15:52:56.139: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 4.491290004s Aug 19 15:52:58.152: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 6.50423156s Aug 19 15:53:00.229: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 8.581196627s Aug 19 15:53:02.284: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 10.6361314s Aug 19 15:53:04.326: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 12.678127954s Aug 19 15:53:06.452: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 14.804090294s Aug 19 15:53:08.481: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 16.833172605s Aug 19 15:53:10.498: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 18.849990579s Aug 19 15:53:12.573: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 20.924977315s Aug 19 15:53:14.600: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Running", Reason="", readiness=true. Elapsed: 22.952331338s Aug 19 15:53:16.608: INFO: Pod "pod-subpath-test-downwardapi-tp7q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.959909627s STEP: Saw pod success Aug 19 15:53:16.608: INFO: Pod "pod-subpath-test-downwardapi-tp7q" satisfied condition "Succeeded or Failed" Aug 19 15:53:16.614: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-tp7q container test-container-subpath-downwardapi-tp7q: STEP: delete the pod Aug 19 15:53:16.708: INFO: Waiting for pod pod-subpath-test-downwardapi-tp7q to disappear Aug 19 15:53:16.722: INFO: Pod pod-subpath-test-downwardapi-tp7q no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-tp7q Aug 19 15:53:16.722: INFO: Deleting pod "pod-subpath-test-downwardapi-tp7q" in namespace "subpath-6567" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:53:16.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6567" for this suite. • [SLOW TEST:25.385 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":276,"skipped":4340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:53:16.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3156 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-3156 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3156 Aug 19 15:53:17.061: INFO: Found 0 stateful pods, waiting for 1 Aug 19 15:53:27.070: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 19 15:53:27.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 15:53:28.777: INFO: stderr: "I0819 15:53:28.603763 4145 log.go:181] (0x4000c96000) (0x40005bc000) Create stream\nI0819 15:53:28.610803 4145 log.go:181] (0x4000c96000) (0x40005bc000) Stream added, broadcasting: 1\nI0819 15:53:28.626503 4145 log.go:181] (0x4000c96000) Reply frame received for 1\nI0819 15:53:28.627227 4145 log.go:181] (0x4000c96000) (0x4000898dc0) Create stream\nI0819 15:53:28.627304 4145 log.go:181] (0x4000c96000) (0x4000898dc0) Stream added, broadcasting: 3\nI0819 15:53:28.628809 4145 log.go:181] (0x4000c96000) Reply frame received for 3\nI0819 15:53:28.629110 4145 log.go:181] (0x4000c96000) (0x4000299400) Create stream\nI0819 15:53:28.629171 4145 log.go:181] (0x4000c96000) (0x4000299400) Stream added, broadcasting: 5\nI0819 15:53:28.630395 4145 log.go:181] (0x4000c96000) Reply frame received for 5\nI0819 15:53:28.715394 4145 log.go:181] (0x4000c96000) Data frame received for 5\nI0819 15:53:28.715715 4145 log.go:181] (0x4000299400) (5) Data frame handling\nI0819 15:53:28.716417 4145 log.go:181] (0x4000299400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 15:53:28.753196 4145 log.go:181] (0x4000c96000) Data frame received for 3\nI0819 15:53:28.753330 4145 log.go:181] (0x4000898dc0) (3) Data frame handling\nI0819 15:53:28.753453 4145 log.go:181] (0x4000898dc0) (3) Data frame sent\nI0819 15:53:28.753560 4145 log.go:181] (0x4000c96000) Data frame received for 3\nI0819 15:53:28.753769 4145 log.go:181] (0x4000898dc0) (3) Data frame handling\nI0819 15:53:28.754038 4145 log.go:181] (0x4000c96000) Data frame received for 5\nI0819 15:53:28.754198 4145 log.go:181] (0x4000299400) (5) Data frame handling\nI0819 15:53:28.754750 4145 log.go:181] (0x4000c96000) Data frame received for 1\nI0819 15:53:28.754850 4145 log.go:181] (0x40005bc000) (1) Data frame handling\nI0819 15:53:28.754949 4145 log.go:181] (0x40005bc000) (1) Data frame sent\nI0819 15:53:28.757426 4145 log.go:181] (0x4000c96000) (0x40005bc000) Stream removed, broadcasting: 1\nI0819 15:53:28.761769 4145 log.go:181] (0x4000c96000) Go away received\nI0819 15:53:28.762141 4145 log.go:181] (0x4000c96000) (0x40005bc000) Stream removed, broadcasting: 1\nI0819 15:53:28.764712 4145 log.go:181] (0x4000c96000) (0x4000898dc0) Stream removed, broadcasting: 3\nI0819 15:53:28.765373 4145 log.go:181] (0x4000c96000) (0x4000299400) Stream removed, broadcasting: 5\n" Aug 19 15:53:28.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 15:53:28.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 15:53:28.786: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 19 15:53:38.793: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 19 15:53:38.793: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 15:53:38.912: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:53:38.913: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:53:38.914: INFO: ss-1 Pending [] Aug 19 15:53:38.914: INFO: Aug 19 15:53:38.914: INFO: StatefulSet ss has not reached scale 3, at 2 Aug 19 15:53:39.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.895163403s Aug 19 15:53:40.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.884093623s Aug 19 15:53:41.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.873697645s Aug 19 15:53:42.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.858398349s Aug 19 15:53:43.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.847941857s Aug 19 15:53:44.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.837014681s Aug 19 15:53:45.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.826913578s Aug 19 15:53:47.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.818482516s Aug 19 15:53:48.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 808.258999ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3156 Aug 19 15:53:49.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 15:53:50.658: INFO: stderr: "I0819 15:53:50.526149 4165 log.go:181] (0x4000232370) (0x4000626820) Create stream\nI0819 15:53:50.533267 4165 log.go:181] (0x4000232370) (0x4000626820) Stream added, broadcasting: 1\nI0819 15:53:50.545440 4165 log.go:181] (0x4000232370) Reply frame received for 1\nI0819 15:53:50.546535 4165 log.go:181] (0x4000232370) (0x40009a4000) Create stream\nI0819 15:53:50.546635 4165 log.go:181] (0x4000232370) (0x40009a4000) Stream added, broadcasting: 3\nI0819 15:53:50.548447 4165 log.go:181] (0x4000232370) Reply frame received for 3\nI0819 15:53:50.549168 4165 log.go:181] (0x4000232370) (0x40001f8000) Create stream\nI0819 15:53:50.549299 4165 log.go:181] (0x4000232370) (0x40001f8000) Stream added, broadcasting: 5\nI0819 15:53:50.551019 4165 log.go:181] (0x4000232370) Reply frame received for 5\nI0819 15:53:50.634887 4165 log.go:181] (0x4000232370) Data frame received for 5\nI0819 15:53:50.635384 4165 log.go:181] (0x4000232370) Data frame received for 1\nI0819 15:53:50.635791 4165 log.go:181] (0x4000626820) (1) Data frame handling\nI0819 15:53:50.635970 4165 log.go:181] (0x4000232370) Data frame received for 3\nI0819 15:53:50.636104 4165 log.go:181] (0x40009a4000) (3) Data frame handling\nI0819 15:53:50.636387 4165 log.go:181] (0x40001f8000) (5) Data frame handling\nI0819 15:53:50.637121 4165 log.go:181] (0x4000626820) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0819 15:53:50.638272 4165 log.go:181] (0x40001f8000) (5) Data frame sent\nI0819 15:53:50.638521 4165 log.go:181] (0x4000232370) Data frame received for 5\nI0819 15:53:50.638647 4165 log.go:181] (0x40001f8000) (5) Data frame handling\nI0819 15:53:50.638866 4165 log.go:181] (0x40009a4000) (3) Data frame sent\nI0819 15:53:50.638967 4165 
log.go:181] (0x4000232370) Data frame received for 3\nI0819 15:53:50.639048 4165 log.go:181] (0x40009a4000) (3) Data frame handling\nI0819 15:53:50.642050 4165 log.go:181] (0x4000232370) (0x4000626820) Stream removed, broadcasting: 1\nI0819 15:53:50.643475 4165 log.go:181] (0x4000232370) Go away received\nI0819 15:53:50.647636 4165 log.go:181] (0x4000232370) (0x4000626820) Stream removed, broadcasting: 1\nI0819 15:53:50.648125 4165 log.go:181] (0x4000232370) (0x40009a4000) Stream removed, broadcasting: 3\nI0819 15:53:50.648308 4165 log.go:181] (0x4000232370) (0x40001f8000) Stream removed, broadcasting: 5\n" Aug 19 15:53:50.659: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 15:53:50.659: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 15:53:50.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 15:53:52.288: INFO: stderr: "I0819 15:53:52.171432 4185 log.go:181] (0x40000fc160) (0x40008897c0) Create stream\nI0819 15:53:52.175731 4185 log.go:181] (0x40000fc160) (0x40008897c0) Stream added, broadcasting: 1\nI0819 15:53:52.192484 4185 log.go:181] (0x40000fc160) Reply frame received for 1\nI0819 15:53:52.193426 4185 log.go:181] (0x40000fc160) (0x4000c2fb80) Create stream\nI0819 15:53:52.193504 4185 log.go:181] (0x40000fc160) (0x4000c2fb80) Stream added, broadcasting: 3\nI0819 15:53:52.195182 4185 log.go:181] (0x40000fc160) Reply frame received for 3\nI0819 15:53:52.195392 4185 log.go:181] (0x40000fc160) (0x4000c13a40) Create stream\nI0819 15:53:52.195444 4185 log.go:181] (0x40000fc160) (0x4000c13a40) Stream added, broadcasting: 5\nI0819 15:53:52.196536 4185 log.go:181] (0x40000fc160) Reply frame received for 5\nI0819 15:53:52.264392 4185 log.go:181] (0x40000fc160) Data frame received for 3\nI0819 15:53:52.264896 4185 log.go:181] (0x40000fc160) Data frame received for 5\nI0819 15:53:52.265239 4185 log.go:181] (0x4000c2fb80) (3) Data frame handling\nI0819 15:53:52.265449 4185 log.go:181] (0x4000c13a40) (5) Data frame handling\nI0819 15:53:52.266321 4185 log.go:181] (0x4000c13a40) (5) Data frame sent\nI0819 15:53:52.266795 4185 log.go:181] (0x40000fc160) Data frame received for 5\nI0819 15:53:52.266917 4185 log.go:181] (0x4000c13a40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0819 15:53:52.268117 4185 log.go:181] (0x4000c2fb80) (3) Data frame sent\nI0819 15:53:52.268185 4185 log.go:181] (0x40000fc160) Data frame received for 3\nI0819 15:53:52.268242 4185 log.go:181] (0x4000c2fb80) (3) Data frame handling\nI0819 15:53:52.268890 4185 log.go:181] (0x40000fc160) Data frame received for 1\nI0819 15:53:52.268982 4185 log.go:181] (0x40008897c0) (1) Data frame handling\nI0819 15:53:52.269075 4185 log.go:181] (0x40008897c0) (1) Data frame sent\nI0819 15:53:52.270062 4185 log.go:181] (0x40000fc160) (0x40008897c0) Stream removed, broadcasting: 1\nI0819 15:53:52.272277 4185 log.go:181] (0x40000fc160) Go away received\nI0819 15:53:52.276546 4185 log.go:181] (0x40000fc160) (0x40008897c0) Stream removed, broadcasting: 1\nI0819 15:53:52.276909 4185 log.go:181] (0x40000fc160) (0x4000c2fb80) Stream removed, broadcasting: 3\nI0819 15:53:52.277118 4185 log.go:181] (0x40000fc160) (0x4000c13a40) Stream 
removed, broadcasting: 5\n" Aug 19 15:53:52.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 15:53:52.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 15:53:52.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 15:53:53.948: INFO: stderr: "I0819 15:53:53.818233 4205 log.go:181] (0x40000cc0b0) (0x4000848000) Create stream\nI0819 15:53:53.824679 4205 log.go:181] (0x40000cc0b0) (0x4000848000) Stream added, broadcasting: 1\nI0819 15:53:53.836551 4205 log.go:181] (0x40000cc0b0) Reply frame received for 1\nI0819 15:53:53.837876 4205 log.go:181] (0x40000cc0b0) (0x40001b6500) Create stream\nI0819 15:53:53.837976 4205 log.go:181] (0x40000cc0b0) (0x40001b6500) Stream added, broadcasting: 3\nI0819 15:53:53.839744 4205 log.go:181] (0x40000cc0b0) Reply frame received for 3\nI0819 15:53:53.840046 4205 log.go:181] (0x40000cc0b0) (0x4000bce000) Create stream\nI0819 15:53:53.840105 4205 log.go:181] (0x40000cc0b0) (0x4000bce000) Stream added, broadcasting: 5\nI0819 15:53:53.841574 4205 log.go:181] (0x40000cc0b0) Reply frame received for 5\nI0819 15:53:53.926935 4205 log.go:181] (0x40000cc0b0) Data frame received for 1\nI0819 15:53:53.927224 4205 log.go:181] (0x40000cc0b0) Data frame received for 5\nI0819 15:53:53.927561 4205 log.go:181] (0x40000cc0b0) Data frame received for 3\nI0819 15:53:53.928042 4205 log.go:181] (0x40001b6500) (3) Data frame handling\nI0819 15:53:53.928610 4205 log.go:181] (0x4000848000) (1) Data frame handling\nI0819 15:53:53.928879 4205 log.go:181] (0x4000bce000) (5) Data frame handling\nI0819 15:53:53.930270 4205 log.go:181] (0x4000bce000) (5) Data frame sent\nI0819 15:53:53.930517 4205 log.go:181] (0x4000848000) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0819 15:53:53.930771 4205 log.go:181] (0x40001b6500) (3) Data frame sent\nI0819 15:53:53.930837 4205 log.go:181] (0x40000cc0b0) Data frame received for 5\nI0819 15:53:53.930937 4205 log.go:181] (0x4000bce000) (5) Data frame handling\nI0819 15:53:53.931018 4205 log.go:181] (0x40000cc0b0) Data frame received for 3\nI0819 15:53:53.931123 4205 log.go:181] (0x40001b6500) (3) Data frame handling\nI0819 15:53:53.932323 4205 log.go:181] (0x40000cc0b0) (0x4000848000) Stream removed, broadcasting: 1\nI0819 15:53:53.933506 4205 log.go:181] (0x40000cc0b0) Go away received\nI0819 15:53:53.937154 4205 log.go:181] (0x40000cc0b0) (0x4000848000) Stream removed, broadcasting: 1\nI0819 15:53:53.937492 4205 log.go:181] (0x40000cc0b0) (0x40001b6500) Stream removed, broadcasting: 3\nI0819 15:53:53.937726 4205 log.go:181] (0x40000cc0b0) (0x4000bce000) Stream removed, broadcasting: 5\n" Aug 19 15:53:53.949: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 19 15:53:53.949: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 19 15:53:53.957: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:53:53.957: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 19 15:53:53.957: INFO: Waiting for pod ss-2 to enter Running - 
Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 19 15:53:53.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 15:53:55.628: INFO: stderr: "I0819 15:53:55.498072 4225 log.go:181] (0x400003a4d0) (0x4000d8e000) Create stream\nI0819 15:53:55.500960 4225 log.go:181] (0x400003a4d0) (0x4000d8e000) Stream added, broadcasting: 1\nI0819 15:53:55.510650 4225 log.go:181] (0x400003a4d0) Reply frame received for 1\nI0819 15:53:55.511682 4225 log.go:181] (0x400003a4d0) (0x40009134a0) Create stream\nI0819 15:53:55.511781 4225 log.go:181] (0x400003a4d0) (0x40009134a0) Stream added, broadcasting: 3\nI0819 15:53:55.513454 4225 log.go:181] (0x400003a4d0) Reply frame received for 3\nI0819 15:53:55.513763 4225 log.go:181] (0x400003a4d0) (0x4000913540) Create stream\nI0819 15:53:55.513831 4225 log.go:181] (0x400003a4d0) (0x4000913540) Stream added, broadcasting: 5\nI0819 15:53:55.515015 4225 log.go:181] (0x400003a4d0) Reply frame received for 5\nI0819 15:53:55.605999 4225 log.go:181] (0x400003a4d0) Data frame received for 3\nI0819 15:53:55.606207 4225 log.go:181] (0x400003a4d0) Data frame received for 5\nI0819 15:53:55.606510 4225 log.go:181] (0x400003a4d0) Data frame received for 1\nI0819 15:53:55.607067 4225 log.go:181] (0x40009134a0) (3) Data frame handling\nI0819 15:53:55.607660 4225 log.go:181] (0x4000913540) (5) Data frame handling\nI0819 15:53:55.608457 4225 log.go:181] (0x4000913540) (5) Data frame sent\nI0819 15:53:55.608679 4225 log.go:181] (0x40009134a0) (3) Data frame sent\nI0819 15:53:55.609252 4225 log.go:181] (0x400003a4d0) Data frame received for 3\nI0819 15:53:55.609451 4225 log.go:181] (0x400003a4d0) Data frame received for 5\nI0819 15:53:55.609654 4225 log.go:181] (0x4000913540) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 15:53:55.610315 4225 log.go:181] (0x40009134a0) (3) Data frame handling\nI0819 15:53:55.610600 4225 log.go:181] (0x4000d8e000) (1) Data frame handling\nI0819 15:53:55.610820 4225 log.go:181] (0x4000d8e000) (1) Data frame sent\nI0819 15:53:55.612365 4225 log.go:181] (0x400003a4d0) (0x4000d8e000) Stream removed, broadcasting: 1\nI0819 15:53:55.615698 4225 log.go:181] (0x400003a4d0) Go away received\nI0819 15:53:55.618058 4225 log.go:181] (0x400003a4d0) (0x4000d8e000) Stream removed, broadcasting: 1\nI0819 15:53:55.618621 4225 log.go:181] (0x400003a4d0) (0x40009134a0) Stream removed, broadcasting: 3\nI0819 15:53:55.618836 4225 log.go:181] (0x400003a4d0) (0x4000913540) Stream removed, broadcasting: 5\n" Aug 19 15:53:55.629: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 15:53:55.629: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 15:53:55.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 15:53:57.245: INFO: stderr: "I0819 15:53:57.119072 4246 log.go:181] (0x4000e9c000) (0x4000b9e000) Create stream\nI0819 15:53:57.121644 4246 log.go:181] (0x4000e9c000) (0x4000b9e000) Stream added, broadcasting: 1\nI0819 15:53:57.133845 4246 log.go:181] (0x4000e9c000) Reply frame received for 1\nI0819 
15:53:57.134490 4246 log.go:181] (0x4000e9c000) (0x4000c80000) Create stream\nI0819 15:53:57.134559 4246 log.go:181] (0x4000e9c000) (0x4000c80000) Stream added, broadcasting: 3\nI0819 15:53:57.135691 4246 log.go:181] (0x4000e9c000) Reply frame received for 3\nI0819 15:53:57.136011 4246 log.go:181] (0x4000e9c000) (0x4000c800a0) Create stream\nI0819 15:53:57.136081 4246 log.go:181] (0x4000e9c000) (0x4000c800a0) Stream added, broadcasting: 5\nI0819 15:53:57.137501 4246 log.go:181] (0x4000e9c000) Reply frame received for 5\nI0819 15:53:57.189388 4246 log.go:181] (0x4000e9c000) Data frame received for 5\nI0819 15:53:57.189627 4246 log.go:181] (0x4000c800a0) (5) Data frame handling\nI0819 15:53:57.190116 4246 log.go:181] (0x4000c800a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 15:53:57.226402 4246 log.go:181] (0x4000e9c000) Data frame received for 3\nI0819 15:53:57.226494 4246 log.go:181] (0x4000c80000) (3) Data frame handling\nI0819 15:53:57.226555 4246 log.go:181] (0x4000c80000) (3) Data frame sent\nI0819 15:53:57.226608 4246 log.go:181] (0x4000e9c000) Data frame received for 3\nI0819 15:53:57.226668 4246 log.go:181] (0x4000c80000) (3) Data frame handling\nI0819 15:53:57.226853 4246 log.go:181] (0x4000e9c000) Data frame received for 5\nI0819 15:53:57.226913 4246 log.go:181] (0x4000c800a0) (5) Data frame handling\nI0819 15:53:57.228008 4246 log.go:181] (0x4000e9c000) Data frame received for 1\nI0819 15:53:57.228057 4246 log.go:181] (0x4000b9e000) (1) Data frame handling\nI0819 15:53:57.228110 4246 log.go:181] (0x4000b9e000) (1) Data frame sent\nI0819 15:53:57.229602 4246 log.go:181] (0x4000e9c000) (0x4000b9e000) Stream removed, broadcasting: 1\nI0819 15:53:57.232198 4246 log.go:181] (0x4000e9c000) Go away received\nI0819 15:53:57.235237 4246 log.go:181] (0x4000e9c000) (0x4000b9e000) Stream removed, broadcasting: 1\nI0819 15:53:57.235471 4246 log.go:181] (0x4000e9c000) (0x4000c80000) Stream removed, broadcasting: 3\nI0819 15:53:57.235611 4246 log.go:181] (0x4000e9c000) (0x4000c800a0) Stream removed, broadcasting: 5\n" Aug 19 15:53:57.246: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 15:53:57.246: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 15:53:57.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 19 15:53:58.964: INFO: stderr: "I0819 15:53:58.805545 4266 log.go:181] (0x4000aa33f0) (0x40003905a0) Create stream\nI0819 15:53:58.807782 4266 log.go:181] (0x4000aa33f0) (0x40003905a0) Stream added, broadcasting: 1\nI0819 15:53:58.827418 4266 log.go:181] (0x4000aa33f0) Reply frame received for 1\nI0819 15:53:58.828002 4266 log.go:181] (0x4000aa33f0) (0x4000390000) Create stream\nI0819 15:53:58.828065 4266 log.go:181] (0x4000aa33f0) (0x4000390000) Stream added, broadcasting: 3\nI0819 15:53:58.829385 4266 log.go:181] (0x4000aa33f0) Reply frame received for 3\nI0819 15:53:58.829661 4266 log.go:181] (0x4000aa33f0) (0x4000a0e000) Create stream\nI0819 15:53:58.829727 4266 log.go:181] (0x4000aa33f0) (0x4000a0e000) Stream added, broadcasting: 5\nI0819 15:53:58.830732 4266 log.go:181] (0x4000aa33f0) Reply frame received for 5\nI0819 15:53:58.913251 4266 log.go:181] (0x4000aa33f0) Data frame received for 5\nI0819 15:53:58.913629 4266 log.go:181] 
(0x4000a0e000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0819 15:53:58.915076 4266 log.go:181] (0x4000a0e000) (5) Data frame sent\nI0819 15:53:58.942873 4266 log.go:181] (0x4000aa33f0) Data frame received for 3\nI0819 15:53:58.943012 4266 log.go:181] (0x4000390000) (3) Data frame handling\nI0819 15:53:58.943136 4266 log.go:181] (0x4000390000) (3) Data frame sent\nI0819 15:53:58.943316 4266 log.go:181] (0x4000aa33f0) Data frame received for 5\nI0819 15:53:58.943426 4266 log.go:181] (0x4000a0e000) (5) Data frame handling\nI0819 15:53:58.944003 4266 log.go:181] (0x4000aa33f0) Data frame received for 3\nI0819 15:53:58.944114 4266 log.go:181] (0x4000390000) (3) Data frame handling\nI0819 15:53:58.945297 4266 log.go:181] (0x4000aa33f0) Data frame received for 1\nI0819 15:53:58.945431 4266 log.go:181] (0x40003905a0) (1) Data frame handling\nI0819 15:53:58.945543 4266 log.go:181] (0x40003905a0) (1) Data frame sent\nI0819 15:53:58.946922 4266 log.go:181] (0x4000aa33f0) (0x40003905a0) Stream removed, broadcasting: 1\nI0819 15:53:58.950462 4266 log.go:181] (0x4000aa33f0) Go away received\nI0819 15:53:58.953911 4266 log.go:181] (0x4000aa33f0) (0x40003905a0) Stream removed, broadcasting: 1\nI0819 15:53:58.954514 4266 log.go:181] (0x4000aa33f0) (0x4000390000) Stream removed, broadcasting: 3\nI0819 15:53:58.954753 4266 log.go:181] (0x4000aa33f0) (0x4000a0e000) Stream removed, broadcasting: 5\n" Aug 19 15:53:58.965: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 19 15:53:58.965: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 19 15:53:58.966: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 15:53:58.971: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 19 15:54:08.985: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 19 15:54:08.985: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 19 15:54:08.985: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 19 15:54:09.017: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:54:09.017: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:54:09.018: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:38 +0000 UTC }] Aug 19 15:54:09.018: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC }] Aug 19 15:54:09.018: INFO: Aug 19 15:54:09.018: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 19 15:54:10.457: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:54:10.458: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:54:10.458: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:38 +0000 UTC }] Aug 19 15:54:10.458: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC }] Aug 19 15:54:10.458: INFO: Aug 19 15:54:10.459: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 19 15:54:11.472: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:54:11.472: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:54:11.472: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:38 +0000 UTC }] Aug 19 15:54:11.472: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 
15:53:39 +0000 UTC }] Aug 19 15:54:11.473: INFO: Aug 19 15:54:11.473: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 19 15:54:12.492: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:54:12.492: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:54:12.492: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:38 +0000 UTC }] Aug 19 15:54:12.493: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC }] Aug 19 15:54:12.493: INFO: Aug 19 15:54:12.493: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 19 15:54:13.500: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:54:13.500: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:54:13.501: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:38 +0000 UTC }] Aug 19 15:54:13.501: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC }] Aug 19 15:54:13.501: INFO: Aug 19 15:54:13.501: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 19 15:54:15.236: INFO: POD NODE PHASE GRACE CONDITIONS Aug 19 15:54:15.236: INFO: 
ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:17 +0000 UTC }] Aug 19 15:54:15.237: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:38 +0000 UTC }] Aug 19 15:54:15.237: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-19 15:53:39 +0000 UTC }] Aug 19 15:54:15.237: INFO: Aug 19 15:54:15.237: INFO: StatefulSet ss has not reached scale 0, at 3 [Polls at 15:54:16.516, 15:54:17.529 and 15:54:18.539 logged an identical listing (ss-0, ss-1 and ss-2 all Pending, grace 30s, conditions unchanged), each ending: StatefulSet ss has not reached scale 0, at 3] STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3156 Aug 19 15:54:19.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 15:54:20.891: INFO: rc: 1 Aug 19 15:54:20.891: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [The same RunHostCmd was retried every 10s from 15:54:30.892 through 15:59:16.184; every attempt returned rc: 1 with the identical stderr: Error from server (NotFound): pods "ss-0" not found] Aug 19 15:59:27.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3156 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 19 15:59:29.015: INFO: rc: 1 Aug 19 15:59:29.016: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ ||
true on ss-0: Aug 19 15:59:29.016: INFO: Scaling statefulset ss to 0 Aug 19 15:59:29.029: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 19 15:59:29.032: INFO: Deleting all statefulset in ns statefulset-3156 Aug 19 15:59:29.035: INFO: Scaling statefulset ss to 0 Aug 19 15:59:29.048: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 15:59:29.051: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 15:59:29.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3156" for this suite. • [SLOW TEST:372.352 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":277,"skipped":4390,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 15:59:29.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 19 15:59:29.891: INFO: Pod name wrapped-volume-race-02d7ac10-cec2-498d-b1ea-2726f7d73277: Found 0 pods out of 5 Aug 19 15:59:34.915: INFO: Pod name wrapped-volume-race-02d7ac10-cec2-498d-b1ea-2726f7d73277: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-02d7ac10-cec2-498d-b1ea-2726f7d73277 in namespace emptydir-wrapper-7240, will wait for the garbage collector to delete the pods Aug 19 15:59:51.542: INFO: Deleting ReplicationController wrapped-volume-race-02d7ac10-cec2-498d-b1ea-2726f7d73277 took: 8.50267ms Aug 19 15:59:51.943: INFO: Terminating 
ReplicationController wrapped-volume-race-02d7ac10-cec2-498d-b1ea-2726f7d73277 pods took: 400.606715ms STEP: Creating RC which spawns configmap-volume pods Aug 19 16:00:10.445: INFO: Pod name wrapped-volume-race-cd088be9-2d72-4d50-94f4-1f99f3f6c385: Found 0 pods out of 5 Aug 19 16:00:15.461: INFO: Pod name wrapped-volume-race-cd088be9-2d72-4d50-94f4-1f99f3f6c385: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cd088be9-2d72-4d50-94f4-1f99f3f6c385 in namespace emptydir-wrapper-7240, will wait for the garbage collector to delete the pods Aug 19 16:00:31.622: INFO: Deleting ReplicationController wrapped-volume-race-cd088be9-2d72-4d50-94f4-1f99f3f6c385 took: 10.719219ms Aug 19 16:00:32.523: INFO: Terminating ReplicationController wrapped-volume-race-cd088be9-2d72-4d50-94f4-1f99f3f6c385 pods took: 900.795003ms STEP: Creating RC which spawns configmap-volume pods Aug 19 16:00:50.371: INFO: Pod name wrapped-volume-race-7fec9e89-3249-404a-bf8d-f37932fea6d2: Found 0 pods out of 5 Aug 19 16:00:55.388: INFO: Pod name wrapped-volume-race-7fec9e89-3249-404a-bf8d-f37932fea6d2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7fec9e89-3249-404a-bf8d-f37932fea6d2 in namespace emptydir-wrapper-7240, will wait for the garbage collector to delete the pods Aug 19 16:01:09.585: INFO: Deleting ReplicationController wrapped-volume-race-7fec9e89-3249-404a-bf8d-f37932fea6d2 took: 9.784546ms Aug 19 16:01:10.086: INFO: Terminating ReplicationController wrapped-volume-race-7fec9e89-3249-404a-bf8d-f37932fea6d2 pods took: 501.230699ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:01:20.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7240" for this suite. 
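The race this test guards against comes from a single pod mounting many ConfigMap-backed volumes while a controller churns pods quickly. A hand-run approximation of the setup, hedged: the wrapper-demo namespace and all names are illustrative, the loop creates 5 ConfigMaps rather than the suite's 50, and the sketch wires up one mount where the suite wires up all of them.

# Create a handful of ConfigMaps to back the volumes (the suite creates 50).
kubectl create namespace wrapper-demo
for i in $(seq 0 4); do
  kubectl -n wrapper-demo create configmap racey-cm-$i --from-literal=data=value-$i
done

# ReplicationController whose pods mount the ConfigMaps as volumes; rapidly
# deleting and recreating this RC is the churn that used to race in the kubelet.
cat <<'EOF' | kubectl -n wrapper-demo apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-demo
spec:
  replicas: 5
  selector:
    app: wrapper-demo
  template:
    metadata:
      labels:
        app: wrapper-demo
    spec:
      containers:
      - name: test-container
        image: k8s.gcr.io/pause:3.2
        volumeMounts:
        - name: racey-volume-0
          mountPath: /etc/config-0
        # one volumeMount per ConfigMap in the full version
      volumes:
      - name: racey-volume-0
        configMap:
          name: racey-cm-0
      # one volume per ConfigMap in the full version
EOF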
• [SLOW TEST:111.565 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":278,"skipped":4396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:01:20.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-dk6n STEP: Creating a pod to test atomic-volume-subpath Aug 19 16:01:20.756: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dk6n" in namespace "subpath-9389" to be "Succeeded or Failed" Aug 19 16:01:20.769: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.336705ms Aug 19 16:01:22.777: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020092129s Aug 19 16:01:24.784: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026930101s Aug 19 16:01:26.859: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 6.102884246s Aug 19 16:01:28.895: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 8.138464774s Aug 19 16:01:31.005: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 10.248052539s Aug 19 16:01:33.041: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 12.28467788s Aug 19 16:01:35.048: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 14.291552082s Aug 19 16:01:37.055: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 16.298716943s Aug 19 16:01:39.062: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 18.305154036s Aug 19 16:01:41.069: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.312766196s Aug 19 16:01:43.077: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 22.320541486s Aug 19 16:01:45.177: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Running", Reason="", readiness=true. Elapsed: 24.420262547s Aug 19 16:01:47.184: INFO: Pod "pod-subpath-test-secret-dk6n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.427876992s STEP: Saw pod success Aug 19 16:01:47.185: INFO: Pod "pod-subpath-test-secret-dk6n" satisfied condition "Succeeded or Failed" Aug 19 16:01:47.191: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-dk6n container test-container-subpath-secret-dk6n: STEP: delete the pod Aug 19 16:01:47.232: INFO: Waiting for pod pod-subpath-test-secret-dk6n to disappear Aug 19 16:01:47.245: INFO: Pod pod-subpath-test-secret-dk6n no longer exists STEP: Deleting pod pod-subpath-test-secret-dk6n Aug 19 16:01:47.245: INFO: Deleting pod "pod-subpath-test-secret-dk6n" in namespace "subpath-9389" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:01:47.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9389" for this suite. • [SLOW TEST:26.607 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":279,"skipped":4430,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:01:47.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7493 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in 
namespace statefulset-7493 Aug 19 16:01:47.423: INFO: Found 0 stateful pods, waiting for 1 Aug 19 16:01:57.431: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 19 16:01:57.467: INFO: Deleting all statefulset in ns statefulset-7493 Aug 19 16:01:57.499: INFO: Scaling statefulset ss to 0 Aug 19 16:02:07.589: INFO: Waiting for statefulset status.replicas updated to 0 Aug 19 16:02:07.593: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:02:07.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7493" for this suite. • [SLOW TEST:20.345 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":280,"skipped":4430,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:02:07.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 19 16:02:11.852: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:02:11.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6658" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:02:11.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 19 16:02:11.992: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
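For reference, the registration this step performs amounts to creating an APIService object that tells kube-aggregator to proxy one group/version to an in-cluster Service. A minimal sketch of such a manifest; the group wardle.example.com, the service name sample-api and the namespace aggregator-demo are placeholders, not the objects this suite actually creates.

# Register an aggregated API: requests for /apis/wardle.example.com/v1alpha1
# are proxied by kube-aggregator to the sample-api Service below.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api            # placeholder backend Service
    namespace: aggregator-demo  # placeholder namespace
    port: 443
  # A real deployment should set spec.caBundle instead of skipping TLS checks.
  insecureSkipTLSVerify: true
EOF

groupPriorityMinimum and versionPriority only affect ordering in discovery; the service stanza is what actually routes /apis/wardle.example.com/v1alpha1 to the backend.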
Aug 19 16:02:14.112: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 19 16:02:17.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 16:02:19.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733449734, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 16:02:22.239: INFO: Waited 722.193756ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:02:22.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7700" for this suite. 
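The wait logged above ("Waited 722.193756ms for the sample-apiserver to be ready") corresponds to the APIService's Available condition turning True. A quick way to observe the same thing by hand, assuming the placeholder group from the sketch above:

# The Available condition flips to True once the aggregator can reach the backend.
kubectl get apiservice v1alpha1.wardle.example.com \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'

# Probing the proxied discovery endpoint is an equivalent readiness check.
kubectl get --raw /apis/wardle.example.com/v1alpha1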
• [SLOW TEST:11.046 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":282,"skipped":4513,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:02:22.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 16:02:23.767: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:02:24.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3385" for this suite. 
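Equivalent CLI steps for what this test does through the client libraries: create a CustomResourceDefinition, confirm it is served, and delete it. A sketch with a hypothetical foos.example.com definition (the suite uses randomized group names):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF

# The new API group appears in discovery almost immediately...
kubectl get crd foos.example.com
# ...and deleting the CRD also removes every Foo object it served.
kubectl delete crd foos.example.com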
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":283,"skipped":4530,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:02:24.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7afda7ef-df87-4d7e-a584-450c647cf01e STEP: Creating a pod to test consume configMaps Aug 19 16:02:24.924: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a" in namespace "projected-9627" to be "Succeeded or Failed" Aug 19 16:02:24.978: INFO: Pod "pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.060919ms Aug 19 16:02:27.005: INFO: Pod "pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080185156s Aug 19 16:02:29.197: INFO: Pod "pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.272428974s STEP: Saw pod success Aug 19 16:02:29.197: INFO: Pod "pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a" satisfied condition "Succeeded or Failed" Aug 19 16:02:29.223: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a container projected-configmap-volume-test: STEP: delete the pod Aug 19 16:02:29.287: INFO: Waiting for pod pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a to disappear Aug 19 16:02:29.335: INFO: Pod pod-projected-configmaps-5ef1d8d5-6c8c-4c50-a0c0-ffe0d24f844a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:02:29.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9627" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4583,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:02:29.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 19 16:02:29.415: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 19 16:02:29.435: INFO: Waiting for terminating namespaces to be deleted... Aug 19 16:02:29.452: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 19 16:02:29.460: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 16:02:29.460: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 16:02:29.460: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 16:02:29.460: INFO: Container kube-proxy ready: true, restart count 0 Aug 19 16:02:29.460: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 19 16:02:29.468: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 19 16:02:29.468: INFO: Container kindnet-cni ready: true, restart count 0 Aug 19 16:02:29.468: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 19 16:02:29.468: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-2e3da299-cf7a-440c-9d0a-59f19f024fcf 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-2e3da299-cf7a-440c-9d0a-59f19f024fcf off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2e3da299-cf7a-440c-9d0a-59f19f024fcf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:07:37.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6474" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.456 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":285,"skipped":4598,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:07:37.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Aug 19 16:07:37.902: INFO: Waiting up to 5m0s for pod "var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e" in namespace "var-expansion-1169" to be "Succeeded or Failed" Aug 19 16:07:37.909: INFO: Pod "var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497916ms Aug 19 16:07:39.918: INFO: Pod "var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015847437s Aug 19 16:07:41.927: INFO: Pod "var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024746104s Aug 19 16:07:43.936: INFO: Pod "var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033402402s STEP: Saw pod success Aug 19 16:07:43.936: INFO: Pod "var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e" satisfied condition "Succeeded or Failed" Aug 19 16:07:43.942: INFO: Trying to get logs from node latest-worker2 pod var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e container dapi-container: STEP: delete the pod Aug 19 16:07:43.994: INFO: Waiting for pod var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e to disappear Aug 19 16:07:44.004: INFO: Pod var-expansion-374f1ab8-e3b8-4470-aefe-59fd4e93834e no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:07:44.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1169" for this suite. • [SLOW TEST:6.238 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4598,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:07:44.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 16:07:44.211: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 19 16:07:44.301: INFO: Number of nodes with available pods: 0 Aug 19 16:07:44.301: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
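The "complex daemon" here is label-driven: the DaemonSet carries a node selector, so flipping a node's label between blue and green schedules or evicts its pod, and the update strategy is later switched to RollingUpdate. Roughly, with label keys and image as assumptions:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set     # illustrative pod-template label
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        nodeSelector:
          color: blue                  # changed to green as the steps proceed
        containers:
        - name: app
          image: k8s.gcr.io/pause:3.2  # illustrative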
Aug 19 16:07:44.358: INFO: Number of nodes with available pods: 0 Aug 19 16:07:44.358: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:45.365: INFO: Number of nodes with available pods: 0 Aug 19 16:07:45.365: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:46.366: INFO: Number of nodes with available pods: 0 Aug 19 16:07:46.366: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:47.365: INFO: Number of nodes with available pods: 1 Aug 19 16:07:47.365: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 19 16:07:47.410: INFO: Number of nodes with available pods: 1 Aug 19 16:07:47.410: INFO: Number of running nodes: 0, number of available pods: 1 Aug 19 16:07:48.418: INFO: Number of nodes with available pods: 0 Aug 19 16:07:48.418: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 19 16:07:48.433: INFO: Number of nodes with available pods: 0 Aug 19 16:07:48.433: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:49.440: INFO: Number of nodes with available pods: 0 Aug 19 16:07:49.440: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:50.442: INFO: Number of nodes with available pods: 0 Aug 19 16:07:50.442: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:51.441: INFO: Number of nodes with available pods: 0 Aug 19 16:07:51.441: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:52.442: INFO: Number of nodes with available pods: 0 Aug 19 16:07:52.442: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:53.441: INFO: Number of nodes with available pods: 0 Aug 19 16:07:53.441: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:54.442: INFO: Number of nodes with available pods: 0 Aug 19 16:07:54.442: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:55.441: INFO: Number of nodes with available pods: 0 Aug 19 16:07:55.441: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:56.442: INFO: Number of nodes with available pods: 0 Aug 19 16:07:56.442: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:57.441: INFO: Number of nodes with available pods: 0 Aug 19 16:07:57.441: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:58.440: INFO: Number of nodes with available pods: 0 Aug 19 16:07:58.440: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:07:59.440: INFO: Number of nodes with available pods: 0 Aug 19 16:07:59.440: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:08:00.440: INFO: Number of nodes with available pods: 0 Aug 19 16:08:00.440: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:08:01.593: INFO: Number of nodes with available pods: 0 Aug 19 16:08:01.594: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:08:02.440: INFO: Number of nodes with available pods: 0 Aug 19 16:08:02.440: INFO: Node latest-worker2 is running more than one daemon pod Aug 19 16:08:03.442: INFO: Number of nodes with available pods: 1 Aug 19 16:08:03.442: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-869, will wait for the garbage collector to delete the pods Aug 19 16:08:03.517: INFO: Deleting DaemonSet.extensions daemon-set took: 8.116682ms Aug 19 16:08:03.918: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.776945ms Aug 19 16:08:09.723: INFO: Number of nodes with available pods: 0 Aug 19 16:08:09.723: INFO: Number of running nodes: 0, number of available pods: 0 Aug 19 16:08:09.728: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-869/daemonsets","resourceVersion":"1539274"},"items":null} Aug 19 16:08:09.738: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-869/pods","resourceVersion":"1539274"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:08:09.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-869" for this suite. • [SLOW TEST:25.750 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":287,"skipped":4601,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:08:09.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-6436 STEP: creating replication controller nodeport-test in namespace services-6436 I0819 16:08:09.961917 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6436, replica count: 2 I0819 16:08:13.013461 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 16:08:16.014272 10 
runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 16:08:16.014: INFO: Creating new exec pod Aug 19 16:08:21.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6436 execpod76lfg -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 19 16:08:26.005: INFO: stderr: "I0819 16:08:25.888878 4847 log.go:181] (0x400003adc0) (0x40001bba40) Create stream\nI0819 16:08:25.891064 4847 log.go:181] (0x400003adc0) (0x40001bba40) Stream added, broadcasting: 1\nI0819 16:08:25.899038 4847 log.go:181] (0x400003adc0) Reply frame received for 1\nI0819 16:08:25.899529 4847 log.go:181] (0x400003adc0) (0x40001bbb80) Create stream\nI0819 16:08:25.899575 4847 log.go:181] (0x400003adc0) (0x40001bbb80) Stream added, broadcasting: 3\nI0819 16:08:25.900808 4847 log.go:181] (0x400003adc0) Reply frame received for 3\nI0819 16:08:25.901083 4847 log.go:181] (0x400003adc0) (0x40009c40a0) Create stream\nI0819 16:08:25.901142 4847 log.go:181] (0x400003adc0) (0x40009c40a0) Stream added, broadcasting: 5\nI0819 16:08:25.902090 4847 log.go:181] (0x400003adc0) Reply frame received for 5\nI0819 16:08:25.963688 4847 log.go:181] (0x400003adc0) Data frame received for 5\nI0819 16:08:25.964112 4847 log.go:181] (0x40009c40a0) (5) Data frame handling\nI0819 16:08:25.964695 4847 log.go:181] (0x400003adc0) Data frame received for 3\nI0819 16:08:25.964845 4847 log.go:181] (0x40001bbb80) (3) Data frame handling\nI0819 16:08:25.964916 4847 log.go:181] (0x400003adc0) Data frame received for 1\nI0819 16:08:25.965000 4847 log.go:181] (0x40001bba40) (1) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI0819 16:08:25.967558 4847 log.go:181] (0x40001bba40) (1) Data frame sent\nI0819 16:08:25.968040 4847 log.go:181] (0x40009c40a0) (5) Data frame sent\nI0819 16:08:25.968194 4847 log.go:181] (0x400003adc0) Data frame received for 5\nI0819 16:08:25.968930 4847 log.go:181] (0x400003adc0) (0x40001bba40) Stream removed, broadcasting: 1\nI0819 16:08:25.969799 4847 log.go:181] (0x40009c40a0) (5) Data frame handling\nI0819 16:08:25.969910 4847 log.go:181] (0x40009c40a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0819 16:08:25.969989 4847 log.go:181] (0x400003adc0) Data frame received for 5\nI0819 16:08:25.970057 4847 log.go:181] (0x40009c40a0) (5) Data frame handling\nI0819 16:08:25.972574 4847 log.go:181] (0x400003adc0) Go away received\nI0819 16:08:25.995852 4847 log.go:181] (0x400003adc0) (0x40001bba40) Stream removed, broadcasting: 1\nI0819 16:08:25.996419 4847 log.go:181] (0x400003adc0) (0x40001bbb80) Stream removed, broadcasting: 3\nI0819 16:08:25.996584 4847 log.go:181] (0x400003adc0) (0x40009c40a0) Stream removed, broadcasting: 5\n" Aug 19 16:08:26.006: INFO: stdout: "" Aug 19 16:08:26.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6436 execpod76lfg -- /bin/sh -x -c nc -zv -t -w 2 10.107.163.140 80' Aug 19 16:08:27.694: INFO: stderr: "I0819 16:08:27.577735 4876 log.go:181] (0x40005d4b00) (0x400074c280) Create stream\nI0819 16:08:27.582285 4876 log.go:181] (0x40005d4b00) (0x400074c280) Stream added, broadcasting: 1\nI0819 16:08:27.593564 4876 log.go:181] (0x40005d4b00) Reply frame received for 1\nI0819 16:08:27.594196 4876 log.go:181] (0x40005d4b00) (0x4000127900) Create stream\nI0819 16:08:27.594264 4876 log.go:181] (0x40005d4b00) 
(0x4000127900) Stream added, broadcasting: 3\nI0819 16:08:27.595903 4876 log.go:181] (0x40005d4b00) Reply frame received for 3\nI0819 16:08:27.596409 4876 log.go:181] (0x40005d4b00) (0x40001279a0) Create stream\nI0819 16:08:27.596561 4876 log.go:181] (0x40005d4b00) (0x40001279a0) Stream added, broadcasting: 5\nI0819 16:08:27.598453 4876 log.go:181] (0x40005d4b00) Reply frame received for 5\nI0819 16:08:27.670251 4876 log.go:181] (0x40005d4b00) Data frame received for 5\nI0819 16:08:27.670758 4876 log.go:181] (0x40005d4b00) Data frame received for 3\nI0819 16:08:27.670924 4876 log.go:181] (0x4000127900) (3) Data frame handling\nI0819 16:08:27.671030 4876 log.go:181] (0x40005d4b00) Data frame received for 1\nI0819 16:08:27.671129 4876 log.go:181] (0x400074c280) (1) Data frame handling\nI0819 16:08:27.671280 4876 log.go:181] (0x40001279a0) (5) Data frame handling\nI0819 16:08:27.673146 4876 log.go:181] (0x400074c280) (1) Data frame sent\nI0819 16:08:27.673269 4876 log.go:181] (0x40001279a0) (5) Data frame sent\nI0819 16:08:27.673458 4876 log.go:181] (0x40005d4b00) Data frame received for 5\nI0819 16:08:27.673536 4876 log.go:181] (0x40001279a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.163.140 80\nConnection to 10.107.163.140 80 port [tcp/http] succeeded!\nI0819 16:08:27.676608 4876 log.go:181] (0x40005d4b00) (0x400074c280) Stream removed, broadcasting: 1\nI0819 16:08:27.678273 4876 log.go:181] (0x40005d4b00) Go away received\nI0819 16:08:27.682230 4876 log.go:181] (0x40005d4b00) (0x400074c280) Stream removed, broadcasting: 1\nI0819 16:08:27.682507 4876 log.go:181] (0x40005d4b00) (0x4000127900) Stream removed, broadcasting: 3\nI0819 16:08:27.682703 4876 log.go:181] (0x40005d4b00) (0x40001279a0) Stream removed, broadcasting: 5\n" Aug 19 16:08:27.695: INFO: stdout: "" Aug 19 16:08:27.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6436 execpod76lfg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32437' Aug 19 16:08:29.756: INFO: stderr: "I0819 16:08:29.648549 4896 log.go:181] (0x40002ed130) (0x4000664280) Create stream\nI0819 16:08:29.651732 4896 log.go:181] (0x40002ed130) (0x4000664280) Stream added, broadcasting: 1\nI0819 16:08:29.666205 4896 log.go:181] (0x40002ed130) Reply frame received for 1\nI0819 16:08:29.666805 4896 log.go:181] (0x40002ed130) (0x4000962140) Create stream\nI0819 16:08:29.666873 4896 log.go:181] (0x40002ed130) (0x4000962140) Stream added, broadcasting: 3\nI0819 16:08:29.668264 4896 log.go:181] (0x40002ed130) Reply frame received for 3\nI0819 16:08:29.668576 4896 log.go:181] (0x40002ed130) (0x4000963cc0) Create stream\nI0819 16:08:29.668640 4896 log.go:181] (0x40002ed130) (0x4000963cc0) Stream added, broadcasting: 5\nI0819 16:08:29.670252 4896 log.go:181] (0x40002ed130) Reply frame received for 5\nI0819 16:08:29.736281 4896 log.go:181] (0x40002ed130) Data frame received for 3\nI0819 16:08:29.736798 4896 log.go:181] (0x40002ed130) Data frame received for 1\nI0819 16:08:29.736939 4896 log.go:181] (0x4000664280) (1) Data frame handling\nI0819 16:08:29.737012 4896 log.go:181] (0x40002ed130) Data frame received for 5\nI0819 16:08:29.737078 4896 log.go:181] (0x4000963cc0) (5) Data frame handling\nI0819 16:08:29.737236 4896 log.go:181] (0x4000962140) (3) Data frame handling\nI0819 16:08:29.738709 4896 log.go:181] (0x4000664280) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 32437\nConnection to 172.18.0.11 32437 port [tcp/32437] succeeded!\nI0819 16:08:29.739148 4896 log.go:181] 
(0x4000963cc0) (5) Data frame sent\nI0819 16:08:29.739764 4896 log.go:181] (0x40002ed130) Data frame received for 5\nI0819 16:08:29.739846 4896 log.go:181] (0x4000963cc0) (5) Data frame handling\nI0819 16:08:29.740171 4896 log.go:181] (0x40002ed130) (0x4000664280) Stream removed, broadcasting: 1\nI0819 16:08:29.742909 4896 log.go:181] (0x40002ed130) Go away received\nI0819 16:08:29.745986 4896 log.go:181] (0x40002ed130) (0x4000664280) Stream removed, broadcasting: 1\nI0819 16:08:29.746638 4896 log.go:181] (0x40002ed130) (0x4000962140) Stream removed, broadcasting: 3\nI0819 16:08:29.746857 4896 log.go:181] (0x40002ed130) (0x4000963cc0) Stream removed, broadcasting: 5\n" Aug 19 16:08:29.757: INFO: stdout: "" Aug 19 16:08:29.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6436 execpod76lfg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32437' Aug 19 16:08:31.385: INFO: stderr: "I0819 16:08:31.280887 4917 log.go:181] (0x40005acf20) (0x40005a45a0) Create stream\nI0819 16:08:31.285172 4917 log.go:181] (0x40005acf20) (0x40005a45a0) Stream added, broadcasting: 1\nI0819 16:08:31.300206 4917 log.go:181] (0x40005acf20) Reply frame received for 1\nI0819 16:08:31.300802 4917 log.go:181] (0x40005acf20) (0x4000c2e0a0) Create stream\nI0819 16:08:31.300859 4917 log.go:181] (0x40005acf20) (0x4000c2e0a0) Stream added, broadcasting: 3\nI0819 16:08:31.301977 4917 log.go:181] (0x40005acf20) Reply frame received for 3\nI0819 16:08:31.302196 4917 log.go:181] (0x40005acf20) (0x4000c2e140) Create stream\nI0819 16:08:31.302250 4917 log.go:181] (0x40005acf20) (0x4000c2e140) Stream added, broadcasting: 5\nI0819 16:08:31.303203 4917 log.go:181] (0x40005acf20) Reply frame received for 5\nI0819 16:08:31.364999 4917 log.go:181] (0x40005acf20) Data frame received for 3\nI0819 16:08:31.365668 4917 log.go:181] (0x40005acf20) Data frame received for 1\nI0819 16:08:31.365847 4917 log.go:181] (0x40005a45a0) (1) Data frame handling\nI0819 16:08:31.365938 4917 log.go:181] (0x4000c2e0a0) (3) Data frame handling\nI0819 16:08:31.366121 4917 log.go:181] (0x40005acf20) Data frame received for 5\nI0819 16:08:31.366274 4917 log.go:181] (0x4000c2e140) (5) Data frame handling\nI0819 16:08:31.366879 4917 log.go:181] (0x40005a45a0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 32437\nConnection to 172.18.0.14 32437 port [tcp/32437] succeeded!\nI0819 16:08:31.369673 4917 log.go:181] (0x40005acf20) (0x40005a45a0) Stream removed, broadcasting: 1\nI0819 16:08:31.369915 4917 log.go:181] (0x4000c2e140) (5) Data frame sent\nI0819 16:08:31.370049 4917 log.go:181] (0x40005acf20) Data frame received for 5\nI0819 16:08:31.370983 4917 log.go:181] (0x4000c2e140) (5) Data frame handling\nI0819 16:08:31.371797 4917 log.go:181] (0x40005acf20) Go away received\nI0819 16:08:31.373853 4917 log.go:181] (0x40005acf20) (0x40005a45a0) Stream removed, broadcasting: 1\nI0819 16:08:31.374324 4917 log.go:181] (0x40005acf20) (0x4000c2e0a0) Stream removed, broadcasting: 3\nI0819 16:08:31.374464 4917 log.go:181] (0x40005acf20) (0x4000c2e140) Stream removed, broadcasting: 5\n" Aug 19 16:08:31.386: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:08:31.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6436" for this suite. 
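Tying the probes above together: a NodePort service exposes port 80 on a cluster IP (10.107.163.140 in this run) and on every node at an allocated node port (32437 here, from the default 30000-32767 range). The manifest is roughly:

  apiVersion: v1
  kind: Service
  metadata:
    name: nodeport-test
  spec:
    type: NodePort
    selector:
      name: nodeport-test      # label key is an assumption; must match the RC's pods
    ports:
    - protocol: TCP
      port: 80                 # cluster-IP port probed above
      targetPort: 80
      # nodePort left unset, so the apiserver allocated 32437 in this run

The nc probes then confirm reachability three ways: by service name, by cluster IP, and by each node IP at the node port.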
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.601 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":288,"skipped":4615,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:08:31.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 19 16:08:32.539: INFO: Waiting up to 5m0s for pod "pod-44f3816e-cb62-40ac-aa82-b712762acf5f" in namespace "emptydir-9687" to be "Succeeded or Failed" Aug 19 16:08:32.682: INFO: Pod "pod-44f3816e-cb62-40ac-aa82-b712762acf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 142.454807ms Aug 19 16:08:35.136: INFO: Pod "pod-44f3816e-cb62-40ac-aa82-b712762acf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.596833401s Aug 19 16:08:37.263: INFO: Pod "pod-44f3816e-cb62-40ac-aa82-b712762acf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.72367376s Aug 19 16:08:39.271: INFO: Pod "pod-44f3816e-cb62-40ac-aa82-b712762acf5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.730927371s STEP: Saw pod success Aug 19 16:08:39.271: INFO: Pod "pod-44f3816e-cb62-40ac-aa82-b712762acf5f" satisfied condition "Succeeded or Failed" Aug 19 16:08:39.275: INFO: Trying to get logs from node latest-worker2 pod pod-44f3816e-cb62-40ac-aa82-b712762acf5f container test-container: STEP: delete the pod Aug 19 16:08:39.435: INFO: Waiting for pod pod-44f3816e-cb62-40ac-aa82-b712762acf5f to disappear Aug 19 16:08:39.512: INFO: Pod pod-44f3816e-cb62-40ac-aa82-b712762acf5f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:08:39.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9687" for this suite. 
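The (root,0666,default) case parameterizes a small pod: write a file into an emptyDir of the default medium as root with mode 0666, then verify its permissions and content from the pod log. A sketch, with image and commands as assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-example         # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                   # illustrative; the suite uses its own mount tester
      command: ["sh", "-c", "echo -n mount-tester > /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                     # "default" medium; medium: Memory would use tmpfs instead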
• [SLOW TEST:8.128 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":289,"skipped":4630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:08:39.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 16:08:39.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d" in namespace "downward-api-7576" to be "Succeeded or Failed" Aug 19 16:08:39.661: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.583198ms Aug 19 16:08:41.909: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275797554s Aug 19 16:08:43.915: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280878447s Aug 19 16:08:46.010: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.37654578s Aug 19 16:08:48.240: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d": Phase="Running", Reason="", readiness=true. Elapsed: 8.606348183s Aug 19 16:08:50.495: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.861705368s STEP: Saw pod success Aug 19 16:08:50.496: INFO: Pod "downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d" satisfied condition "Succeeded or Failed" Aug 19 16:08:50.501: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d container client-container: STEP: delete the pod Aug 19 16:08:51.251: INFO: Waiting for pod downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d to disappear Aug 19 16:08:51.264: INFO: Pod downwardapi-volume-43215e33-3f87-40e4-af41-c85355221d6d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:08:51.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7576" for this suite. • [SLOW TEST:11.976 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4672,"failed":0} SSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:08:51.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:08:52.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3244" for this suite. 
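The Endpoint lifecycle steps above walk a single object through create, list, update, patch, and delete-by-collection. The object itself is just a set of addresses and ports; a sketch with illustrative values:

  apiVersion: v1
  kind: Endpoints
  metadata:
    name: example-endpoint     # illustrative
    labels:
      test: lifecycle          # a label like this lets the delete-by-collection step select it
  subsets:
  - addresses:
    - ip: 10.0.0.10            # illustrative address
    ports:
    - name: http
      port: 80
      protocol: TCP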
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":291,"skipped":4678,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:08:53.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 16:08:54.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb" in namespace "projected-6698" to be "Succeeded or Failed" Aug 19 16:08:55.006: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb": Phase="Pending", Reason="", readiness=false. Elapsed: 298.333638ms Aug 19 16:08:57.200: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.492017726s Aug 19 16:08:59.412: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.704261716s Aug 19 16:09:01.604: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.896132877s Aug 19 16:09:03.813: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb": Phase="Running", Reason="", readiness=true. Elapsed: 9.105535662s Aug 19 16:09:05.992: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.284120056s STEP: Saw pod success Aug 19 16:09:05.992: INFO: Pod "downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb" satisfied condition "Succeeded or Failed" Aug 19 16:09:05.997: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb container client-container: STEP: delete the pod Aug 19 16:09:06.052: INFO: Waiting for pod downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb to disappear Aug 19 16:09:06.065: INFO: Pod downwardapi-volume-9a4e5cdd-1afb-4d01-a717-1afa0836e2eb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:09:06.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6698" for this suite. 
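The downward API volume read above surfaces the container's own memory request as a file. A sketch of the pod, with image and request value as assumptions (the container name matches the log):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example   # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                   # illustrative
      command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
      resources:
        requests:
          memory: 32Mi                 # illustrative value
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory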
• [SLOW TEST:12.887 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4681,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:09:06.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 19 16:09:06.688: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d" in namespace "downward-api-3044" to be "Succeeded or Failed" Aug 19 16:09:06.702: INFO: Pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.742824ms Aug 19 16:09:08.709: INFO: Pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020320508s Aug 19 16:09:10.715: INFO: Pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027071367s Aug 19 16:09:12.747: INFO: Pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058781651s Aug 19 16:09:14.861: INFO: Pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.172652543s STEP: Saw pod success Aug 19 16:09:14.861: INFO: Pod "downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d" satisfied condition "Succeeded or Failed" Aug 19 16:09:14.867: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d container client-container: STEP: delete the pod Aug 19 16:09:15.210: INFO: Waiting for pod downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d to disappear Aug 19 16:09:15.265: INFO: Pod downwardapi-volume-3a796453-e78a-47ff-8034-2747b1bd8b7d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:09:15.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3044" for this suite. • [SLOW TEST:9.201 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4691,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:09:15.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-ss6x STEP: Creating a pod to test atomic-volume-subpath Aug 19 16:09:16.026: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ss6x" in namespace "subpath-7516" to be "Succeeded or Failed" Aug 19 16:09:16.250: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Pending", Reason="", readiness=false. Elapsed: 224.235566ms Aug 19 16:09:18.257: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231462034s Aug 19 16:09:20.273: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247080049s Aug 19 16:09:22.282: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.255622012s Aug 19 16:09:24.288: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 8.262241758s Aug 19 16:09:26.296: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 10.269638865s Aug 19 16:09:28.304: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 12.277567353s Aug 19 16:09:30.311: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 14.284896452s Aug 19 16:09:32.319: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 16.292559875s Aug 19 16:09:34.326: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 18.300017222s Aug 19 16:09:36.335: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 20.309424506s Aug 19 16:09:38.342: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 22.316121048s Aug 19 16:09:40.348: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Running", Reason="", readiness=true. Elapsed: 24.322081845s Aug 19 16:09:42.356: INFO: Pod "pod-subpath-test-configmap-ss6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.329760902s STEP: Saw pod success Aug 19 16:09:42.356: INFO: Pod "pod-subpath-test-configmap-ss6x" satisfied condition "Succeeded or Failed" Aug 19 16:09:42.362: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-ss6x container test-container-subpath-configmap-ss6x: STEP: delete the pod Aug 19 16:09:42.404: INFO: Waiting for pod pod-subpath-test-configmap-ss6x to disappear Aug 19 16:09:42.415: INFO: Pod pod-subpath-test-configmap-ss6x no longer exists STEP: Deleting pod pod-subpath-test-configmap-ss6x Aug 19 16:09:42.415: INFO: Deleting pod "pod-subpath-test-configmap-ss6x" in namespace "subpath-7516" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:09:42.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7516" for this suite. 
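The subpath case mounts a single ConfigMap key over the path of a file that already exists in the image, via subPath. A sketch (image, paths, and key are assumptions). Note that subPath mounts sit outside the atomic-writer symlink swap, so later updates to the ConfigMap would not propagate into the container:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-configmap-example   # illustrative
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath
      image: busybox                      # illustrative
      command: ["cat", "/etc/existing-file"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/existing-file     # a path the image already provides as a file
        subPath: data-1                   # single key projected over that file
    volumes:
    - name: cfg
      configMap:
        name: my-configmap                # illustrative; holds key data-1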
• [SLOW TEST:27.169 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":294,"skipped":4705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:09:42.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-3c8565ad-1974-4989-bc75-6bc4535b4061 STEP: Creating a pod to test consume secrets Aug 19 16:09:42.525: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d" in namespace "projected-3141" to be "Succeeded or Failed" Aug 19 16:09:42.573: INFO: Pod "pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.53514ms Aug 19 16:09:44.580: INFO: Pod "pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05484939s Aug 19 16:09:46.588: INFO: Pod "pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062609178s Aug 19 16:09:48.596: INFO: Pod "pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.070431735s STEP: Saw pod success Aug 19 16:09:48.596: INFO: Pod "pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d" satisfied condition "Succeeded or Failed" Aug 19 16:09:48.645: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d container projected-secret-volume-test: STEP: delete the pod Aug 19 16:09:48.675: INFO: Waiting for pod pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d to disappear Aug 19 16:09:48.679: INFO: Pod pod-projected-secrets-8d7bc74b-8208-418f-8045-e6abd3afdf6d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:09:48.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3141" for this suite. • [SLOW TEST:6.239 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4739,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:09:48.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 16:09:51.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 16:09:53.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, 
loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 16:09:55.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450191, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 16:09:58.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 16:09:58.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4783-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:09:59.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2562" for this suite. STEP: Destroying namespace "webhook-2562-markers" for this suite. 
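The registration step above amounts to posting a MutatingWebhookConfiguration that routes CREATEs of the custom resource to the webhook service deployed earlier. A sketch; the path, CA bundle, and configuration name are assumptions, while the service, namespace, group, and resource come from the log:

  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-custom-resource               # illustrative
  webhooks:
  - name: mutate-custom-resource.example.com   # illustrative
    clientConfig:
      service:
        name: e2e-test-webhook
        namespace: webhook-2562
        path: /mutating-custom-resources       # assumption
      caBundle: <base64-encoded-CA>            # placeholder
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["e2e-test-webhook-4783-crds"]
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]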
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.248 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":296,"skipped":4767,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 16:09:59.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
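The handler container just started serves HTTP inside the cluster; the pod created in the next step declares a postStart httpGet hook aimed at it, so the kubelet fires an HTTP GET at the handler as soon as the hooked container starts. A minimal sketch of such a pod, with a hypothetical handler IP, path, and port (the suite resolves these from the handler pod it just created, and uses its own agnhost test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2        # stand-in; the suite uses its own test image
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.1.23            # hypothetical: the handler pod's IP
          path: /echo?msg=poststart    # hypothetical path on the handler
          port: 8080                   # hypothetical handler port
EOF

Because the hook is delivered by the kubelet rather than from inside the container, the hooked image itself needs no knowledge of the handler.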
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 19 16:10:10.126: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 16:10:10.168: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 16:10:12.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 16:10:12.177: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 16:10:14.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 16:10:14.176: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 16:10:16.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 16:10:16.177: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 16:10:18.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 16:10:18.177: INFO: Pod pod-with-poststart-http-hook still exists
Aug 19 16:10:20.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 19 16:10:20.175: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 16:10:20.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4441" for this suite.
• [SLOW TEST:20.242 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":297,"skipped":4824,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 16:10:20.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-acc48329-992d-42d2-a830-bb9e0853327e
STEP: Creating a pod to test consume configMaps
Aug 19 16:10:20.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b" in namespace "projected-3121" to be "Succeeded or Failed"
Aug 19 16:10:20.327: INFO: Pod "pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.119246ms
Aug 19 16:10:22.333: INFO: Pod "pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015277494s
Aug 19 16:10:24.340: INFO: Pod "pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022039999s
STEP: Saw pod success
Aug 19 16:10:24.340: INFO: Pod "pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b" satisfied condition "Succeeded or Failed"
Aug 19 16:10:24.520: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b container projected-configmap-volume-test:
STEP: delete the pod
Aug 19 16:10:24.549: INFO: Waiting for pod pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b to disappear
Aug 19 16:10:24.559: INFO: Pod pod-projected-configmaps-cbba9df8-5828-4aa0-b4da-542522292a6b no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 16:10:24.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3121" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":298,"skipped":4837,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 16:10:24.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 16:10:28.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5288" for this suite.
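The projected-configMap spec that passed above exercises two knobs at once: items remaps a configMap key to an arbitrary file path, and mode sets that file's permission bits. A minimal sketch under assumed names (demo-cm and data-1 are hypothetical; the suite generates unique equivalents and its own test image):

kubectl create configmap demo-cm --from-literal=data-1=value-1   # hypothetical name and key
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo   # hypothetical; the suite generates unique names
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.32                 # stand-in test image
    command: ["sh", "-c", "ls -l /etc/projected/path/to/data && cat /etc/projected/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1
            path: path/to/data          # the mapping: key renamed to a nested path
            mode: 0400                  # the per-item file mode the spec asserts on
EOF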
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":299,"skipped":4854,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 16:10:28.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-911ba21c-c8ce-44a4-bc96-424462774474
STEP: Creating a pod to test consume configMaps
Aug 19 16:10:29.082: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21" in namespace "projected-8212" to be "Succeeded or Failed"
Aug 19 16:10:29.203: INFO: Pod "pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21": Phase="Pending", Reason="", readiness=false. Elapsed: 120.923147ms
Aug 19 16:10:31.211: INFO: Pod "pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128947625s
Aug 19 16:10:33.220: INFO: Pod "pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21": Phase="Running", Reason="", readiness=true. Elapsed: 4.137406186s
Aug 19 16:10:35.226: INFO: Pod "pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14376951s
STEP: Saw pod success
Aug 19 16:10:35.226: INFO: Pod "pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21" satisfied condition "Succeeded or Failed"
Aug 19 16:10:35.230: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21 container projected-configmap-volume-test:
STEP: delete the pod
Aug 19 16:10:35.444: INFO: Waiting for pod pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21 to disappear
Aug 19 16:10:35.562: INFO: Pod pod-projected-configmaps-2a7038dd-226a-4783-9c73-517230b3fb21 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 16:10:35.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8212" for this suite.
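The multiple-volumes spec just verified that one configMap can back several volumes in the same pod, each with its own mount point. A sketch of the shape, reusing the hypothetical demo-cm configMap from the previous example:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-two-mounts   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.32                       # stand-in test image
    command: ["sh", "-c", "cat /etc/cfg-1/data-1 /etc/cfg-2/data-1"]
    volumeMounts:
    - name: cfg-1
      mountPath: /etc/cfg-1
    - name: cfg-2
      mountPath: /etc/cfg-2
  volumes:
  - name: cfg-1
    projected:
      sources:
      - configMap:
          name: demo-cm    # both volumes point at the same configMap
  - name: cfg-2
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF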
• [SLOW TEST:6.800 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4858,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:10:35.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 19 16:10:37.816: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 19 16:10:39.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 19 16:10:41.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733450237, loc:(*time.Location)(0x6e4f160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 19 16:10:44.977: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 19 16:10:44.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 19 16:10:46.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6251" for this suite. STEP: Destroying namespace "webhook-6251-markers" for this suite. 
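Unlike the mutating test earlier, this webhook's rules cover CREATE, UPDATE, and DELETE, which is what lets it veto every phase the spec walks through: the denied create, the denied update, the denied delete, and finally the successful delete once the offending key is removed. A sketch of such a registration; the service name and namespace come from the log, while the configuration name, webhook name, and serving path are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-example        # hypothetical name
webhooks:
- name: deny-custom-resource.example.com    # hypothetical webhook name
  clientConfig:
    service:
      namespace: webhook-6251               # namespace from the log
      name: e2e-test-webhook                # service name from the log
      path: /custom-resource                # hypothetical serving path
    caBundle: CA_BUNDLE_BASE64              # placeholder for the generated CA
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]   # all three verbs the spec exercises
    resources: ["*"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
EOF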
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.791 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":301,"skipped":4876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 19 16:10:46.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Aug 19 16:10:52.251: INFO: Successfully updated pod "labelsupdate2be23134-4cbe-4d50-819b-15ee9d3d8652"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 16:10:56.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5407" for this suite.
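What makes this spec pass is that a downwardAPI volume is live: when the test updates the pod's labels (the "Successfully updated pod" line), the kubelet rewrites the projected labels file in place without restarting the container. A minimal sketch, with a hypothetical pod name standing in for the generated one:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo      # hypothetical; the suite generates a unique name
  labels:
    key: value-1
spec:
  containers:
  - name: client-container
    image: busybox:1.32        # stand-in test image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
# the kubelet updates /etc/podinfo/labels in place; no container restart involved
kubectl label pod labelsupdate-demo key=value-2 --overwrite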
• [SLOW TEST:9.834 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":302,"skipped":4907,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 19 16:10:56.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9580 STEP: creating service affinity-clusterip-transition in namespace services-9580 STEP: creating replication controller affinity-clusterip-transition in namespace services-9580 I0819 16:10:56.496650 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9580, replica count: 3 I0819 16:10:59.547904 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0819 16:11:02.548842 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 19 16:11:02.558: INFO: Creating new exec pod Aug 19 16:11:07.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-9580 execpod-affinity2pfn2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Aug 19 16:11:09.346: INFO: stderr: "I0819 16:11:09.159806 4937 log.go:181] (0x40008b26e0) (0x4000c8e460) Create stream\nI0819 16:11:09.166296 4937 log.go:181] (0x40008b26e0) (0x4000c8e460) Stream added, broadcasting: 1\nI0819 16:11:09.176239 4937 log.go:181] (0x40008b26e0) Reply frame received for 1\nI0819 16:11:09.176776 4937 log.go:181] (0x40008b26e0) (0x4000c8e500) Create stream\nI0819 16:11:09.176838 4937 log.go:181] (0x40008b26e0) (0x4000c8e500) Stream added, broadcasting: 3\nI0819 16:11:09.178267 4937 log.go:181] (0x40008b26e0) Reply frame received for 3\nI0819 16:11:09.178649 4937 log.go:181] (0x40008b26e0) (0x4000a60000) Create stream\nI0819 16:11:09.178734 4937 
log.go:181] (0x40008b26e0) (0x4000a60000) Stream added, broadcasting: 5\nI0819 16:11:09.180245 4937 log.go:181] (0x40008b26e0) Reply frame received for 5\nI0819 16:11:09.265647 4937 log.go:181] (0x40008b26e0) Data frame received for 5\nI0819 16:11:09.266081 4937 log.go:181] (0x4000a60000) (5) Data frame handling\nI0819 16:11:09.267090 4937 log.go:181] (0x4000a60000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0819 16:11:09.321222 4937 log.go:181] (0x40008b26e0) Data frame received for 5\nI0819 16:11:09.321527 4937 log.go:181] (0x4000a60000) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0819 16:11:09.321764 4937 log.go:181] (0x40008b26e0) Data frame received for 3\nI0819 16:11:09.322033 4937 log.go:181] (0x4000c8e500) (3) Data frame handling\nI0819 16:11:09.322320 4937 log.go:181] (0x4000a60000) (5) Data frame sent\nI0819 16:11:09.322509 4937 log.go:181] (0x40008b26e0) Data frame received for 5\nI0819 16:11:09.322630 4937 log.go:181] (0x4000a60000) (5) Data frame handling\nI0819 16:11:09.323226 4937 log.go:181] (0x40008b26e0) Data frame received for 1\nI0819 16:11:09.323369 4937 log.go:181] (0x4000c8e460) (1) Data frame handling\nI0819 16:11:09.323518 4937 log.go:181] (0x4000c8e460) (1) Data frame sent\nI0819 16:11:09.325473 4937 log.go:181] (0x40008b26e0) (0x4000c8e460) Stream removed, broadcasting: 1\nI0819 16:11:09.327710 4937 log.go:181] (0x40008b26e0) Go away received\nI0819 16:11:09.330831 4937 log.go:181] (0x40008b26e0) (0x4000c8e460) Stream removed, broadcasting: 1\nI0819 16:11:09.331487 4937 log.go:181] (0x40008b26e0) (0x4000c8e500) Stream removed, broadcasting: 3\nI0819 16:11:09.331981 4937 log.go:181] (0x40008b26e0) (0x4000a60000) Stream removed, broadcasting: 5\n" Aug 19 16:11:09.346: INFO: stdout: "" Aug 19 16:11:09.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-9580 execpod-affinity2pfn2 -- /bin/sh -x -c nc -zv -t -w 2 10.107.219.235 80' Aug 19 16:11:10.985: INFO: stderr: "I0819 16:11:10.878713 4957 log.go:181] (0x40008e0160) (0x4000e88aa0) Create stream\nI0819 16:11:10.881798 4957 log.go:181] (0x40008e0160) (0x4000e88aa0) Stream added, broadcasting: 1\nI0819 16:11:10.894037 4957 log.go:181] (0x40008e0160) Reply frame received for 1\nI0819 16:11:10.894823 4957 log.go:181] (0x40008e0160) (0x4000d92000) Create stream\nI0819 16:11:10.894900 4957 log.go:181] (0x40008e0160) (0x4000d92000) Stream added, broadcasting: 3\nI0819 16:11:10.896607 4957 log.go:181] (0x40008e0160) Reply frame received for 3\nI0819 16:11:10.896975 4957 log.go:181] (0x40008e0160) (0x40007252c0) Create stream\nI0819 16:11:10.897056 4957 log.go:181] (0x40008e0160) (0x40007252c0) Stream added, broadcasting: 5\nI0819 16:11:10.898440 4957 log.go:181] (0x40008e0160) Reply frame received for 5\nI0819 16:11:10.962783 4957 log.go:181] (0x40008e0160) Data frame received for 5\nI0819 16:11:10.963498 4957 log.go:181] (0x40008e0160) Data frame received for 1\nI0819 16:11:10.963664 4957 log.go:181] (0x4000e88aa0) (1) Data frame handling\nI0819 16:11:10.963780 4957 log.go:181] (0x40008e0160) Data frame received for 3\nI0819 16:11:10.963904 4957 log.go:181] (0x4000d92000) (3) Data frame handling\nI0819 16:11:10.964172 4957 log.go:181] (0x40007252c0) (5) Data frame handling\nI0819 16:11:10.965260 4957 log.go:181] (0x4000e88aa0) (1) Data frame sent\n+ nc -zv -t -w 2 10.107.219.235 80\nConnection to 10.107.219.235 80 port [tcp/http] succeeded!\nI0819 
16:11:10.966589 4957 log.go:181] (0x40007252c0) (5) Data frame sent\nI0819 16:11:10.966705 4957 log.go:181] (0x40008e0160) Data frame received for 5\nI0819 16:11:10.967010 4957 log.go:181] (0x40008e0160) (0x4000e88aa0) Stream removed, broadcasting: 1\nI0819 16:11:10.969274 4957 log.go:181] (0x40007252c0) (5) Data frame handling\nI0819 16:11:10.969932 4957 log.go:181] (0x40008e0160) Go away received\nI0819 16:11:10.973494 4957 log.go:181] (0x40008e0160) (0x4000e88aa0) Stream removed, broadcasting: 1\nI0819 16:11:10.973774 4957 log.go:181] (0x40008e0160) (0x4000d92000) Stream removed, broadcasting: 3\nI0819 16:11:10.973972 4957 log.go:181] (0x40008e0160) (0x40007252c0) Stream removed, broadcasting: 5\n" Aug 19 16:11:10.986: INFO: stdout: "" Aug 19 16:11:10.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-9580 execpod-affinity2pfn2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.219.235:80/ ; done' Aug 19 16:11:12.740: INFO: stderr: "I0819 16:11:12.523993 4977 log.go:181] (0x400053a2c0) (0x4000562500) Create stream\nI0819 16:11:12.530674 4977 log.go:181] (0x400053a2c0) (0x4000562500) Stream added, broadcasting: 1\nI0819 16:11:12.543306 4977 log.go:181] (0x400053a2c0) Reply frame received for 1\nI0819 16:11:12.543914 4977 log.go:181] (0x400053a2c0) (0x4000a0c500) Create stream\nI0819 16:11:12.543997 4977 log.go:181] (0x400053a2c0) (0x4000a0c500) Stream added, broadcasting: 3\nI0819 16:11:12.545755 4977 log.go:181] (0x400053a2c0) Reply frame received for 3\nI0819 16:11:12.546169 4977 log.go:181] (0x400053a2c0) (0x40005625a0) Create stream\nI0819 16:11:12.546255 4977 log.go:181] (0x400053a2c0) (0x40005625a0) Stream added, broadcasting: 5\nI0819 16:11:12.548055 4977 log.go:181] (0x400053a2c0) Reply frame received for 5\nI0819 16:11:12.609376 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.609818 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.609987 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.610169 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.610766 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.611058 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.614970 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.615041 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.615115 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.615949 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.616012 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.616068 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.616228 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.616357 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.616563 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.624049 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.624159 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.624248 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.625014 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.625149 4977 log.go:181] 
(0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.625271 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.625369 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.625467 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.625574 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.631408 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.631519 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.631639 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.632183 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.632277 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.632356 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.632443 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.632509 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.632621 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.638427 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.638550 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.638681 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.638945 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.639013 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.639071 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.639124 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.639173 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.639228 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.643569 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.643656 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.643748 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.644613 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.644803 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.644912 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.645052 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.645156 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.645260 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.649633 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.649715 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.649798 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.650563 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.650648 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.650727 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.650820 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.650887 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.650979 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.655928 4977 log.go:181] 
(0x400053a2c0) Data frame received for 3\nI0819 16:11:12.656022 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.656125 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.656527 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.656655 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.656829 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.656942 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.657010 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.657089 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.660556 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.660706 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.660971 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.661095 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.661183 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.661251 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.661353 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.661427 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.661523 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.667925 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.668035 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.668129 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.668953 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.669157 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.669337 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.669469 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.669578 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.669706 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.674896 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.674989 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.675101 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.675556 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.675675 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.675791 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.675914 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.676048 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.676167 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.682547 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.682636 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.682724 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.683542 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.683651 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.683797 4977 log.go:181] 
(0x400053a2c0) Data frame received for 3\nI0819 16:11:12.683958 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.684039 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.684143 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.690570 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.690694 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.690776 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.690851 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.690919 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.690997 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.691058 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.691126 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.691200 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.697197 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.697344 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.697487 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.697629 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.697734 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.697833 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.697929 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.698065 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.698202 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.701771 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.701967 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.702109 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.702222 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.702325 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.702450 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.702569 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.702659 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.702765 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.707409 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.707561 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.707737 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.708010 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.708164 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.708323 4977 log.go:181] (0x40005625a0) (5) Data frame sent\n+ echo\n+ curlI0819 16:11:12.708441 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.708569 4977 log.go:181] (0x40005625a0) (5) Data frame handling\n -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:12.708667 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.708936 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.709112 4977 log.go:181] (0x40005625a0) (5) Data frame sent\nI0819 16:11:12.709287 4977 
log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.716169 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.716275 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.716441 4977 log.go:181] (0x4000a0c500) (3) Data frame sent\nI0819 16:11:12.716581 4977 log.go:181] (0x400053a2c0) Data frame received for 5\nI0819 16:11:12.716830 4977 log.go:181] (0x40005625a0) (5) Data frame handling\nI0819 16:11:12.717056 4977 log.go:181] (0x400053a2c0) Data frame received for 3\nI0819 16:11:12.717207 4977 log.go:181] (0x4000a0c500) (3) Data frame handling\nI0819 16:11:12.719064 4977 log.go:181] (0x400053a2c0) Data frame received for 1\nI0819 16:11:12.719197 4977 log.go:181] (0x4000562500) (1) Data frame handling\nI0819 16:11:12.719312 4977 log.go:181] (0x4000562500) (1) Data frame sent\nI0819 16:11:12.720599 4977 log.go:181] (0x400053a2c0) (0x4000562500) Stream removed, broadcasting: 1\nI0819 16:11:12.723747 4977 log.go:181] (0x400053a2c0) Go away received\nI0819 16:11:12.729375 4977 log.go:181] (0x400053a2c0) (0x4000562500) Stream removed, broadcasting: 1\nI0819 16:11:12.729736 4977 log.go:181] (0x400053a2c0) (0x4000a0c500) Stream removed, broadcasting: 3\nI0819 16:11:12.729932 4977 log.go:181] (0x400053a2c0) (0x40005625a0) Stream removed, broadcasting: 5\n" Aug 19 16:11:12.746: INFO: stdout: "\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-7wq5t\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-rbvdz\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg" Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-7wq5t Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-rbvdz Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.747: INFO: Received response from host: affinity-clusterip-transition-k2nfg Aug 19 16:11:12.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec 
--namespace=services-9580 execpod-affinity2pfn2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.219.235:80/ ; done' Aug 19 16:11:14.426: INFO: stderr: "I0819 16:11:14.205638 4997 log.go:181] (0x40005c6160) (0x4000909d60) Create stream\nI0819 16:11:14.208293 4997 log.go:181] (0x40005c6160) (0x4000909d60) Stream added, broadcasting: 1\nI0819 16:11:14.219710 4997 log.go:181] (0x40005c6160) Reply frame received for 1\nI0819 16:11:14.220606 4997 log.go:181] (0x40005c6160) (0x400013edc0) Create stream\nI0819 16:11:14.220676 4997 log.go:181] (0x40005c6160) (0x400013edc0) Stream added, broadcasting: 3\nI0819 16:11:14.222303 4997 log.go:181] (0x40005c6160) Reply frame received for 3\nI0819 16:11:14.222812 4997 log.go:181] (0x40005c6160) (0x400013f9a0) Create stream\nI0819 16:11:14.222938 4997 log.go:181] (0x40005c6160) (0x400013f9a0) Stream added, broadcasting: 5\nI0819 16:11:14.224626 4997 log.go:181] (0x40005c6160) Reply frame received for 5\nI0819 16:11:14.303107 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.303534 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.303689 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.304019 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.304872 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.304977 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.306557 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.306659 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.306784 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.307079 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.307157 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.307225 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.307287 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.307386 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.307458 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.311703 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.311789 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.311900 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.312486 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.312569 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.312636 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.312699 4997 log.go:181] (0x40005c6160) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.312831 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.313139 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.316138 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.316222 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.316309 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.316649 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.316881 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.107.219.235:80/\nI0819 16:11:14.317026 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.317158 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.317231 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.317319 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.322757 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.322864 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.322978 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.323586 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.323683 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.323797 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.323965 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.324048 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.324138 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.330335 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.330441 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.330575 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.331158 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.331241 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.331362 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.331517 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.331662 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.331819 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.337553 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.337650 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.337741 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.338409 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.338510 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.338620 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.338746 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.338832 4997 log.go:181] (0x400013edc0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.338900 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.343751 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.343823 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.343911 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.344700 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.344889 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.344970 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.345037 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.345098 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.345163 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.350096 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.350217 4997 log.go:181] 
(0x400013edc0) (3) Data frame handling\nI0819 16:11:14.350327 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.351254 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.351424 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.351554 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.351713 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.351896 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.352065 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.356500 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.356613 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.356799 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.357044 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.357169 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.357251 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.357336 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.357404 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.357468 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.363112 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.363202 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.363294 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.363717 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.363838 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.363962 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.364128 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.364346 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.364477 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.370175 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.370283 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.370405 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.370818 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.370933 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.371048 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.371127 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.371203 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.371305 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.375532 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.375593 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.375660 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.376492 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.376557 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.376644 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.376839 4997 log.go:181] 
(0x400013edc0) (3) Data frame handling\nI0819 16:11:14.376940 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.377029 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.383791 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.383900 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.384012 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.384569 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.384691 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.384936 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0819 16:11:14.385107 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.385211 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.385334 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n http://10.107.219.235:80/\nI0819 16:11:14.385459 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.385583 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.385716 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.389340 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.389475 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.389608 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.389953 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.390060 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.390148 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.390227 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.390302 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.390395 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.394164 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.394252 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.394345 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.395249 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.395409 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.219.235:80/\nI0819 16:11:14.395599 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.395777 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.395907 4997 log.go:181] (0x400013f9a0) (5) Data frame sent\nI0819 16:11:14.396023 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.402875 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.402989 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.403124 4997 log.go:181] (0x400013edc0) (3) Data frame sent\nI0819 16:11:14.403837 4997 log.go:181] (0x40005c6160) Data frame received for 3\nI0819 16:11:14.403908 4997 log.go:181] (0x400013edc0) (3) Data frame handling\nI0819 16:11:14.404011 4997 log.go:181] (0x40005c6160) Data frame received for 5\nI0819 16:11:14.404112 4997 log.go:181] (0x400013f9a0) (5) Data frame handling\nI0819 16:11:14.405993 4997 log.go:181] (0x40005c6160) Data frame received for 1\nI0819 16:11:14.406065 4997 log.go:181] (0x4000909d60) (1) Data frame handling\nI0819 16:11:14.406138 4997 log.go:181] (0x4000909d60) (1) Data frame sent\nI0819 
Aug 19 16:11:14.432: INFO: stdout: "\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg\naffinity-clusterip-transition-k2nfg"
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Received response from host: affinity-clusterip-transition-k2nfg
Aug 19 16:11:14.433: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9580, will wait for the garbage collector to delete the pods
Aug 19 16:11:14.567: INFO: Deleting ReplicationController affinity-clusterip-transition took: 8.10444ms
Aug 19 16:11:15.368: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 800.634394ms
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 19 16:11:30.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9580" for this suite.
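------------------------------
The verification above reduces to: send a batch of requests at the service's ClusterIP and count the distinct backend hostnames that answer — with ClientIP affinity in effect, all 16 responses name the same pod. A minimal, self-contained Go sketch of that logic, assuming the ClusterIP (10.107.219.235, taken from the log) is reachable from the caller; probeService and the direct HTTP client are illustrative assumptions, since the real test drives curl through an exec pod, as the stderr trace shows:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// probeService issues n GET requests against a ClusterIP service and
// returns a count of responses per backend hostname.
func probeService(url string, n int) (map[string]int, error) {
	// A 2s client timeout, loosely mirroring the test's --connect-timeout 2.
	client := &http.Client{Timeout: 2 * time.Second}
	hosts := map[string]int{}
	for i := 0; i < n; i++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		hosts[strings.TrimSpace(string(body))]++
	}
	return hosts, nil
}

func main() {
	// With sessionAffinity: ClientIP, all 16 responses should name one pod,
	// matching the single hostname seen in the stdout above.
	hosts, err := probeService("http://10.107.219.235:80/", 16)
	if err != nil {
		panic(err)
	}
	fmt.Printf("distinct backends: %d (%v)\n", len(hosts), hosts)
}
------------------------------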
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:33.851 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":303,"skipped":4926,"failed":0}
SSSSSSSS
Aug 19 16:11:30.181: INFO: Running AfterSuite actions on all nodes
Aug 19 16:11:30.182: INFO: Running AfterSuite actions on node 1
Aug 19 16:11:30.182: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml

{"msg":"Test Suite completed","total":303,"completed":303,"skipped":4934,"failed":0}

Ran 303 of 5237 Specs in 8705.775 seconds
SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4934 Skipped
PASS
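------------------------------
The spec that just passed "switches" session affinity by flipping a Service's spec.sessionAffinity between ClientIP and None and re-probing which pods answer. A hedged client-go sketch of that transition, using the namespace, Service name, and kubeconfig path taken from the log; this is a minimal illustration, not the e2e framework's own helper code:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite used.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svcs := cs.CoreV1().Services("services-9580")
	svc, err := svcs.Get(context.TODO(), "affinity-clusterip-transition", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip affinity on: repeated requests from one client should now land on one pod.
	svc.Spec.SessionAffinity = v1.ServiceAffinityClientIP
	if svc, err = svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("sessionAffinity:", svc.Spec.SessionAffinity)

	// ...and back off again, which is the "transition" the spec exercises.
	svc.Spec.SessionAffinity = v1.ServiceAffinityNone
	if _, err = svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------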