I1112 09:52:11.731573      10 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1112 09:52:11.731805      10 e2e.go:109] Starting e2e run "4defbc17-803f-431b-b2bd-bd97a166bd5f" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1605174730 - Will randomize all specs
Will run 278 of 4845 specs

Nov 12 09:52:11.801: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 09:52:11.803: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 12 09:52:11.823: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 12 09:52:11.865: INFO: 37 / 37 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 12 09:52:11.865: INFO: expected 6 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Nov 12 09:52:11.865: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 12 09:52:11.876: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 12 09:52:11.876: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 12 09:52:11.876: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 12 09:52:11.876: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'nodelocaldns' (0 seconds elapsed)
Nov 12 09:52:11.876: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'registry-proxy' (0 seconds elapsed)
Nov 12 09:52:11.876: INFO: e2e test version: v1.17.13
Nov 12 09:52:11.877: INFO: kube-apiserver version: v1.16.7
Nov 12 09:52:11.879: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 09:52:11.883: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:52:11.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Nov 12 09:52:11.898: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 09:52:11.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2" in namespace "downward-api-9080" to be "success or failure"
Nov 12 09:52:11.904: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.462247ms
Nov 12 09:52:13.907: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004174787s
Nov 12 09:52:15.909: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006881792s
Nov 12 09:52:17.912: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009645777s
Nov 12 09:52:19.915: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012101926s
Nov 12 09:52:21.918: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015229683s
Nov 12 09:52:23.920: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.01792226s
STEP: Saw pod success
Nov 12 09:52:23.920: INFO: Pod "downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2" satisfied condition "success or failure"
Nov 12 09:52:23.922: INFO: Trying to get logs from node node4 pod downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2 container client-container:
STEP: delete the pod
Nov 12 09:52:23.940: INFO: Waiting for pod downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2 to disappear
Nov 12 09:52:23.942: INFO: Pod downwardapi-volume-b6b6b4b3-874d-4743-929b-906c641b8fa2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:52:23.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9080" for this suite.

• [SLOW TEST:12.064 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":12,"failed":0}
SSSSSS
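The spec under test wires a downwardAPI volume to the container's own resource request, then reads the projected file back. A minimal sketch of an equivalent pod, using the same k8s.io/api/core/v1 types this suite serializes in its dumps; the pod name, request size, mount path, and args below are illustrative assumptions, not values recorded in this log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			// The container runs to completion so the pod can reach
			// Phase=Succeeded, which is what the poll loop above waits for.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"mounttest", "--file_content=/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// resourceFieldRef projects requests.memory of the
							// named container into the mounted file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}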
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:52:23.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Nov 12 09:52:23.967: INFO: Waiting up to 5m0s for pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d" in namespace "var-expansion-2522" to be "success or failure"
Nov 12 09:52:23.969: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.790347ms
Nov 12 09:52:25.978: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011585086s
Nov 12 09:52:27.981: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014082203s
Nov 12 09:52:29.984: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017128101s
Nov 12 09:52:31.986: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019498392s
Nov 12 09:52:33.989: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022024335s
Nov 12 09:52:35.991: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024557443s
Nov 12 09:52:37.994: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.027170025s
STEP: Saw pod success
Nov 12 09:52:37.994: INFO: Pod "var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d" satisfied condition "success or failure"
Nov 12 09:52:37.996: INFO: Trying to get logs from node node4 pod var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d container dapi-container:
STEP: delete the pod
Nov 12 09:52:38.008: INFO: Waiting for pod var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d to disappear
Nov 12 09:52:38.010: INFO: Pod var-expansion-765baadf-27ce-42f5-9c85-e53be8bdc93d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:52:38.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2522" for this suite.

• [SLOW TEST:14.068 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":18,"failed":0}
SSSSSSSSSSSSSSS
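In this test it is the kubelet, not the shell, that expands $(VAR) references in a container's args from the container's env entries. A minimal sketch of a pod exercising that substitution; the variable name, value, and image are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c"},
				// $(TEST_VAR) is substituted by the kubelet before the shell
				// ever sees the argument; the container just echoes the result.
				Args: []string{"echo $(TEST_VAR)"},
				Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}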
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:52:38.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-e571736c-958e-4e9e-b630-949309f6ab53
STEP: Creating a pod to test consume secrets
Nov 12 09:52:38.051: INFO: Waiting up to 5m0s for pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769" in namespace "secrets-3748" to be "success or failure"
Nov 12 09:52:38.052: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Pending", Reason="", readiness=false. Elapsed: 1.624757ms
Nov 12 09:52:40.055: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004364969s
Nov 12 09:52:42.058: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007628048s
Nov 12 09:52:44.061: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010193308s
Nov 12 09:52:46.064: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012920534s
Nov 12 09:52:48.067: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015881706s
Nov 12 09:52:50.070: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.01915425s
STEP: Saw pod success
Nov 12 09:52:50.070: INFO: Pod "pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769" satisfied condition "success or failure"
Nov 12 09:52:50.072: INFO: Trying to get logs from node node1 pod pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769 container secret-volume-test:
STEP: delete the pod
Nov 12 09:52:50.091: INFO: Waiting for pod pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769 to disappear
Nov 12 09:52:50.092: INFO: Pod pod-secrets-cfd3db7e-1052-4b95-ab02-e7578c3cf769 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:52:50.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3748" for this suite.
STEP: Destroying namespace "secret-namespace-3607" for this suite.

• [SLOW TEST:12.085 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":33,"failed":0}
SSSSSSSS
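The property this test relies on is that a secret volume reference is namespace-local: spec.volumes[].secret.secretName always resolves in the pod's own namespace, so an identically named secret elsewhere (here, in the second namespace the suite destroys above) cannot leak in. A sketch under that assumption; the namespaces, key, and probe args are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two secrets may share a name as long as they live in different namespaces.
	mounted := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "secrets-a"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	decoy := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "secrets-b"},
		Data:       map[string][]byte{"data-1": []byte("other")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-a"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:         []string{"mounttest", "--file_content=/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// SecretName carries no namespace: it always means
					// "secret-test" in the pod's namespace (secrets-a).
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
		},
	}
	fmt.Println(mounted.Namespace, decoy.Namespace, pod.Name)
}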
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:52:50.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 09:52:50.117: INFO: Creating deployment "test-recreate-deployment"
Nov 12 09:52:50.119: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Nov 12 09:52:50.124: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Nov 12 09:52:52.128: INFO: Waiting deployment "test-recreate-deployment" to complete
Nov 12 09:52:52.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 09:52:54.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 09:52:56.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 09:52:58.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771570, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 09:53:00.133: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Nov 12 09:53:00.137: INFO: Updating deployment test-recreate-deployment
Nov 12 09:53:00.137: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Nov 12 09:53:00.159: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3184 /apis/apps/v1/namespaces/deployment-3184/deployments/test-recreate-deployment 060d2141-14f0-4ed7-a2a6-d9bbaa725f10 2985 2 2020-11-12 09:52:50 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bb3cf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-11-12 09:53:00 +0000 UTC,LastTransitionTime:2020-11-12 09:53:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-11-12 09:53:00 +0000 UTC,LastTransitionTime:2020-11-12 09:52:50 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Nov 12 09:53:00.162: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3184 /apis/apps/v1/namespaces/deployment-3184/replicasets/test-recreate-deployment-5f94c574ff 33efc11e-9c40-405d-9155-2000abd23a3b 2982 1 2020-11-12 09:53:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 060d2141-14f0-4ed7-a2a6-d9bbaa725f10 0xc002f50097 0xc002f50098}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f500f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Nov 12 09:53:00.162: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Nov 12 09:53:00.162: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3184 /apis/apps/v1/namespaces/deployment-3184/replicasets/test-recreate-deployment-799c574856 96404a5a-1a40-4e0f-88cf-7921d46426d9 2974 2 2020-11-12 09:52:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 060d2141-14f0-4ed7-a2a6-d9bbaa725f10 0xc002f50157 0xc002f50158}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f501c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Nov 12 09:53:00.164: INFO: Pod "test-recreate-deployment-5f94c574ff-qc69n" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qc69n test-recreate-deployment-5f94c574ff- deployment-3184 /api/v1/namespaces/deployment-3184/pods/test-recreate-deployment-5f94c574ff-qc69n 145d7347-dd2a-44fb-b4ae-15cf1da2f892 2986 0 2020-11-12 09:53:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 33efc11e-9c40-405d-9155-2000abd23a3b 0xc002f50627 0xc002f50628}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9blt6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9blt6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9blt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:53:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:53:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:53:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:53:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.13,PodIP:,StartTime:2020-11-12 09:53:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:53:00.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3184" for this suite.

• [SLOW TEST:10.068 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":4,"skipped":41,"failed":0}
SSSS
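The deployment under test uses the Recreate strategy, which scales the old ReplicaSet to zero before any pod from the new template is created; that is why the dumps above show the old ReplicaSet at Replicas:*0 while the single new pod is still Pending. A minimal sketch of such a deployment, reusing the image and labels visible in the dumps (the replica count is an illustrative assumption):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Strategy=Recreate is the property under test: all old pods are
			// deleted before new ones start, unlike the default RollingUpdate.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", deploy)
}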
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:53:00.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-5e5dda6b-ec4c-40b9-aba8-061792800db0
STEP: Creating secret with name s-test-opt-upd-478347c3-ad7c-47c5-9d40-735c5c536fda
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5e5dda6b-ec4c-40b9-aba8-061792800db0
STEP: Updating secret s-test-opt-upd-478347c3-ad7c-47c5-9d40-735c5c536fda
STEP: Creating secret with name s-test-opt-create-6309c85e-f98d-4e6e-9b6e-b3b13ee26913
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:54:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8642" for this suite.

• [SLOW TEST:96.358 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":45,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
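The volume in this test projects secret sources marked optional, which is what lets the pod keep running while one source secret is deleted, another updated, and a third created, with the kubelet reconciling the mounted files on its sync loop. A sketch of just such a volume definition (the secret name is shortened for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						// Optional=true keeps the pod running when the secret
						// is deleted; the kubelet removes the projected files
						// on a later sync instead of failing the mount.
						Optional: &optional,
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}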
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:54:36.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Nov 12 09:54:51.057: INFO: Successfully updated pod "adopt-release-pv7pn"
STEP: Checking that the Job readopts the Pod
Nov 12 09:54:51.057: INFO: Waiting up to 15m0s for pod "adopt-release-pv7pn" in namespace "job-8897" to be "adopted"
Nov 12 09:54:51.059: INFO: Pod "adopt-release-pv7pn": Phase="Running", Reason="", readiness=true. Elapsed: 1.501158ms
Nov 12 09:54:53.061: INFO: Pod "adopt-release-pv7pn": Phase="Running", Reason="", readiness=true. Elapsed: 2.004038391s
Nov 12 09:54:53.062: INFO: Pod "adopt-release-pv7pn" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Nov 12 09:54:53.567: INFO: Successfully updated pod "adopt-release-pv7pn"
STEP: Checking that the Job releases the Pod
Nov 12 09:54:53.567: INFO: Waiting up to 15m0s for pod "adopt-release-pv7pn" in namespace "job-8897" to be "released"
Nov 12 09:54:53.568: INFO: Pod "adopt-release-pv7pn": Phase="Running", Reason="", readiness=true. Elapsed: 1.637687ms
Nov 12 09:54:55.571: INFO: Pod "adopt-release-pv7pn": Phase="Running", Reason="", readiness=true. Elapsed: 2.004112408s
Nov 12 09:54:55.571: INFO: Pod "adopt-release-pv7pn" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:54:55.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8897" for this suite.

• [SLOW TEST:19.049 seconds]
[sig-apps] Job
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":6,"skipped":78,"failed":0}
S
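Adoption and release are driven purely by labels and ownerReferences: the Job controller adopts a running pod whose labels match spec.selector and that has no controller owner, and releases a pod whose labels stop matching, which are exactly the two transitions the steps above drive by editing the pod. A minimal sketch of a Job like the one created here (name, labels, image, and the sleep command are illustrative assumptions):

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	parallelism := int32(2)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "adopt-release"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				// The controller matches pods by these labels (the selector is
				// defaulted by the API server); stripping them from a live pod
				// makes the Job release it, restoring them lets it re-adopt.
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"job": "adopt-release"}},
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox:1.29",
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", job)
}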
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:54:55.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Nov 12 09:54:55.598: INFO: Waiting up to 5m0s for pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f" in namespace "emptydir-7326" to be "success or failure"
Nov 12 09:54:55.600: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.78051ms
Nov 12 09:54:57.602: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004400877s
Nov 12 09:54:59.605: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007125547s
Nov 12 09:55:01.607: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009575967s
Nov 12 09:55:03.610: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012236157s
Nov 12 09:55:05.613: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015029544s
Nov 12 09:55:07.616: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017884877s
STEP: Saw pod success
Nov 12 09:55:07.616: INFO: Pod "pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f" satisfied condition "success or failure"
Nov 12 09:55:07.618: INFO: Trying to get logs from node node3 pod pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f container test-container:
STEP: delete the pod
Nov 12 09:55:07.636: INFO: Waiting for pod pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f to disappear
Nov 12 09:55:07.638: INFO: Pod pod-4f595d66-f5ec-4273-bc50-b0cc6b19958f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:55:07.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7326" for this suite.

• [SLOW TEST:12.065 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":79,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:55:07.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Nov 12 09:55:07.659: INFO: Waiting up to 5m0s for pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a" in namespace "emptydir-9488" to be "success or failure"
Nov 12 09:55:07.661: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.516043ms
Nov 12 09:55:09.664: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004212056s
Nov 12 09:55:11.666: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00694438s
Nov 12 09:55:13.669: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009418436s
Nov 12 09:55:15.671: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011836975s
Nov 12 09:55:17.674: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014218058s
Nov 12 09:55:19.676: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.016738029s
STEP: Saw pod success
Nov 12 09:55:19.676: INFO: Pod "pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a" satisfied condition "success or failure"
Nov 12 09:55:19.678: INFO: Trying to get logs from node node3 pod pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a container test-container:
STEP: delete the pod
Nov 12 09:55:19.688: INFO: Waiting for pod pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a to disappear
Nov 12 09:55:19.690: INFO: Pod pod-391fcc5a-2d9e-457e-ab4c-3a915ac3364a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:55:19.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9488" for this suite.

• [SLOW TEST:12.053 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":85,"failed":0}
SSSSSSSSSSSSSS
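The two EmptyDir tests above differ only in the volume medium and the file mode the test binary writes: (non-root,0644,default) uses node-disk-backed storage, while (non-root,0666,tmpfs) sets Medium: Memory. A sketch showing both media side by side in a single pod run as a non-root UID; the mount paths, UID, and probe command are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /on-disk /in-memory"},
				// Run as a non-root UID, matching the (non-root,...) variants.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "on-disk", MountPath: "/on-disk"},
					{Name: "in-memory", MountPath: "/in-memory"},
				},
			}},
			Volumes: []corev1.Volume{
				// Default medium: backed by the node's filesystem.
				{Name: "on-disk", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				// Medium=Memory: backed by tmpfs, as in the 0666 variant.
				{Name: "in-memory", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}}},
			},
		},
	}
	fmt.Printf("%+v\n", pod)
}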
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:55:19.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-920f3429-d296-4ad4-90aa-1ae777ef4e1c
STEP: Creating a pod to test consume configMaps
Nov 12 09:55:19.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda" in namespace "projected-1236" to be "success or failure"
Nov 12 09:55:19.719: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 1.632341ms
Nov 12 09:55:21.722: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004250563s
Nov 12 09:55:23.724: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006893951s
Nov 12 09:55:25.727: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009511013s
Nov 12 09:55:27.730: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012488928s
Nov 12 09:55:29.732: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014734188s
Nov 12 09:55:31.735: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Pending", Reason="", readiness=false. Elapsed: 12.017108397s
Nov 12 09:55:33.737: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.019315553s
STEP: Saw pod success
Nov 12 09:55:33.737: INFO: Pod "pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda" satisfied condition "success or failure"
Nov 12 09:55:33.739: INFO: Trying to get logs from node node3 pod pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda container projected-configmap-volume-test:
STEP: delete the pod
Nov 12 09:55:33.748: INFO: Waiting for pod pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda to disappear
Nov 12 09:55:33.750: INFO: Pod pod-projected-configmaps-fc4ddaf7-2562-4a59-8669-52d72840ddda no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:55:33.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1236" for this suite.

• [SLOW TEST:14.059 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":99,"failed":0}
SSSSSSSS
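Consuming a projected configMap as a non-root user works because the projected files can be given a world-readable mode. A sketch of an equivalent pod; the UID, defaultMode, key name, and paths are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	user := int64(1000)
	mode := int32(0444)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1"},
				// Non-root UID: the projected files must still be readable,
				// hence the world-readable default mode below.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &user},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}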
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:55:33.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-488d466f-07ce-4e7d-a509-4269ff2a5ed7
STEP: Creating secret with name s-test-opt-upd-d5b4e94f-3164-40af-ad28-525317207594
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-488d466f-07ce-4e7d-a509-4269ff2a5ed7
STEP: Updating secret s-test-opt-upd-d5b4e94f-3164-40af-ad28-525317207594
STEP: Creating secret with name s-test-opt-create-853a02ec-c3dc-478f-bcb1-1c07d8ebe70a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:56:56.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8287" for this suite.

• [SLOW TEST:82.305 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":107,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl patch
  should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:56:56.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Nov 12 09:56:56.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4214'
Nov 12 09:56:56.395: INFO: stderr: ""
Nov 12 09:56:56.395: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Nov 12 09:56:57.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:56:57.398: INFO: Found 0 / 1
Nov 12 09:56:58.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:56:58.398: INFO: Found 0 / 1
Nov 12 09:56:59.399: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:56:59.399: INFO: Found 0 / 1
Nov 12 09:57:00.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:00.398: INFO: Found 0 / 1
Nov 12 09:57:01.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:01.398: INFO: Found 0 / 1
Nov 12 09:57:02.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:02.398: INFO: Found 0 / 1
Nov 12 09:57:03.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:03.398: INFO: Found 0 / 1
Nov 12 09:57:04.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:04.398: INFO: Found 0 / 1
Nov 12 09:57:05.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:05.398: INFO: Found 0 / 1
Nov 12 09:57:06.398: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:06.398: INFO: Found 1 / 1
Nov 12 09:57:06.398: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Nov 12 09:57:06.400: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:06.400: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Nov 12 09:57:06.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-zh5lr --namespace=kubectl-4214 -p {"metadata":{"annotations":{"x":"y"}}}'
Nov 12 09:57:06.528: INFO: stderr: ""
Nov 12 09:57:06.528: INFO: stdout: "pod/agnhost-master-zh5lr patched\n"
STEP: checking annotations
Nov 12 09:57:06.534: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 09:57:06.534: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:57:06.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4214" for this suite.

• [SLOW TEST:10.485 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1433
    should add annotations for pods in rc [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":11,"skipped":109,"failed":0}
SSSSSSSSSSSS
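The kubectl invocation above issues a strategic-merge patch. The same call through client-go looks like the sketch below, assuming a client-go release contemporary with this v1.17 suite, whose typed Patch method does not yet take a context argument; the kubeconfig path, namespace, and pod name are taken from the log:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Equivalent to:
	//   kubectl patch pod agnhost-master-zh5lr --namespace=kubectl-4214 \
	//     -p '{"metadata":{"annotations":{"x":"y"}}}'
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := clientset.CoreV1().Pods("kubectl-4214").
		Patch("agnhost-master-zh5lr", types.StrategicMergePatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Annotations["x"])
}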
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 09:57:06.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Nov 12 09:57:20.572: INFO: &Pod{ObjectMeta:{send-events-cf74f991-657e-44e9-859c-7c65af1b3e93 events-4478 /api/v1/namespaces/events-4478/pods/send-events-cf74f991-657e-44e9-859c-7c65af1b3e93 e5b7ce33-bd26-49f7-9f12-6d8ed2a3229f 4232 0 2020-11-12 09:57:06 +0000 UTC map[name:foo time:559918484] map[k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.10" ], "mac": "0a:58:0a:f4:04:0a", "default": true, "dns": {} }]] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5mz7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5mz7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5mz7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:57:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:57:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:57:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 09:57:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.16,PodIP:10.244.4.10,StartTime:2020-11-12 09:57:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 09:57:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://22fbe1208c6ec757e37dd287c4e94cdd2a4eb2af9a663e093a9994594e23d39f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Nov 12 09:57:22.575: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Nov 12 09:57:24.578: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 09:57:24.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4478" for this suite.

• [SLOW TEST:18.042 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":12,"skipped":121,"failed":0}
Elapsed: 4.007822072s Nov 12 09:57:30.620: INFO: Pod "pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010466619s Nov 12 09:57:32.623: INFO: Pod "pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013299782s Nov 12 09:57:34.626: INFO: Pod "pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016205253s STEP: Saw pod success Nov 12 09:57:34.626: INFO: Pod "pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0" satisfied condition "success or failure" Nov 12 09:57:34.628: INFO: Trying to get logs from node node3 pod pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0 container secret-volume-test: STEP: delete the pod Nov 12 09:57:34.646: INFO: Waiting for pod pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0 to disappear Nov 12 09:57:34.647: INFO: Pod pod-projected-secrets-b6020b1e-f334-4770-8465-48a8bd06c8c0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:57:34.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3758" for this suite. • [SLOW TEST:10.065 seconds] [sig-storage] Projected secret /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":121,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:57:34.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:57:34.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7199" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:57:34.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:57:50.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8533" for this suite. • [SLOW TEST:16.040 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":15,"skipped":150,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:57:50.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl logs /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 STEP: creating an pod Nov 12 09:57:50.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1862 -- logs-generator --log-lines-total 100 --run-duration 20s' Nov 12 09:57:50.896: INFO: stderr: "" Nov 12 09:57:50.896: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Nov 12 09:57:50.896: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Nov 12 09:57:50.896: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1862" to be "running and ready, or succeeded" Nov 12 09:57:50.898: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 1.414219ms Nov 12 09:57:52.900: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003893777s Nov 12 09:57:54.903: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006276975s Nov 12 09:57:56.906: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009420725s Nov 12 09:57:58.908: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011927831s Nov 12 09:58:00.911: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 10.014174024s Nov 12 09:58:00.911: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Nov 12 09:58:00.911: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Nov 12 09:58:00.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1862' Nov 12 09:58:01.028: INFO: stderr: "" Nov 12 09:58:01.028: INFO: stdout: "I1112 09:57:59.304959 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/ljlc 312\nI1112 09:57:59.505174 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/r6z 207\nI1112 09:57:59.705131 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/c69l 253\nI1112 09:57:59.905065 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/65m 420\nI1112 09:58:00.105012 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/2nt 427\nI1112 09:58:00.305156 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/jncv 453\nI1112 09:58:00.505091 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/m72 545\nI1112 09:58:00.705222 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/jsf 381\nI1112 09:58:00.905133 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/bnq8 454\n" STEP: limiting log lines Nov 12 09:58:01.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1862 --tail=1' Nov 12 09:58:01.155: INFO: stderr: "" Nov 12 09:58:01.155: INFO: stdout: "I1112 09:58:01.105143 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/q5w 394\n" Nov 12 09:58:01.155: INFO: got output "I1112 09:58:01.105143 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/q5w 394\n" STEP: limiting log bytes Nov 12 09:58:01.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1862 --limit-bytes=1' Nov 12 09:58:01.278: INFO: stderr: "" Nov 12 09:58:01.278: INFO: stdout: "I" Nov 12 09:58:01.278: INFO: got output "I" STEP: exposing timestamps Nov 12 09:58:01.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1862 --tail=1 --timestamps' Nov 12 09:58:01.408: INFO: stderr: "" Nov 12 09:58:01.408: INFO: stdout: "2020-11-12T09:58:01.30525304Z I1112 09:58:01.305104 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/ldgr 313\n" Nov 12 09:58:01.408: INFO: got output "2020-11-12T09:58:01.30525304Z I1112 09:58:01.305104 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/ldgr 313\n" STEP: restricting to a time range Nov 12 09:58:03.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1862 --since=1s' Nov 12 09:58:04.048: INFO: stderr: "" Nov 12 09:58:04.048: INFO: stdout: "I1112 09:58:03.105092 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/7jjz 419\nI1112 09:58:03.305098 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/mbzx 224\nI1112 09:58:03.505172 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/wzx 500\nI1112 09:58:03.705089 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/vl4 534\nI1112 09:58:03.905122 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/wgh 505\n" Nov 12 09:58:04.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1862 --since=24h' Nov 12 09:58:04.190: INFO: stderr: "" Nov 12 09:58:04.190: INFO: stdout: "I1112 09:57:59.304959 1 logs_generator.go:76] 0 POST 
/api/v1/namespaces/default/pods/ljlc 312\nI1112 09:57:59.505174 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/r6z 207\nI1112 09:57:59.705131 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/c69l 253\nI1112 09:57:59.905065 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/65m 420\nI1112 09:58:00.105012 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/2nt 427\nI1112 09:58:00.305156 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/jncv 453\nI1112 09:58:00.505091 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/m72 545\nI1112 09:58:00.705222 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/jsf 381\nI1112 09:58:00.905133 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/bnq8 454\nI1112 09:58:01.105143 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/q5w 394\nI1112 09:58:01.305104 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/ldgr 313\nI1112 09:58:01.505138 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/xhrw 331\nI1112 09:58:01.705137 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/p27 563\nI1112 09:58:01.905095 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/6trk 325\nI1112 09:58:02.105110 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/7cr 334\nI1112 09:58:02.305102 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/zr6 565\nI1112 09:58:02.505154 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/mjws 206\nI1112 09:58:02.705146 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4r7 556\nI1112 09:58:02.905143 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/s7xh 364\nI1112 09:58:03.105092 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/7jjz 419\nI1112 09:58:03.305098 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/mbzx 224\nI1112 09:58:03.505172 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/wzx 500\nI1112 09:58:03.705089 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/vl4 534\nI1112 09:58:03.905122 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/wgh 505\nI1112 09:58:04.105145 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/bj9 232\n" [AfterEach] Kubectl logs /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Nov 12 09:58:04.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1862' Nov 12 09:58:07.473: INFO: stderr: "" Nov 12 09:58:07.473: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:58:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1862" for this suite. 
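
The kubectl invocations in this test map one-to-one onto client-go's PodLogOptions: --tail is TailLines, --limit-bytes is LimitBytes, --timestamps is Timestamps, and --since is SinceSeconds. Below is a minimal sketch of driving the same filters programmatically, assuming the pre-context client-go v0.17.x signatures that match this suite's v1.17 vintage; the namespace and pod name are taken from this run, and combining all four filters at once is illustrative only.

    // logfilter.go: stream filtered logs for the logs-generator pod.
    package main

    import (
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        tail := int64(1)  // kubectl logs --tail=1
        limit := int64(1) // kubectl logs --limit-bytes=1
        since := int64(1) // kubectl logs --since=1s
        opts := &corev1.PodLogOptions{
            TailLines:    &tail,
            LimitBytes:   &limit,
            SinceSeconds: &since,
            Timestamps:   true, // kubectl logs --timestamps
        }
        // GetLogs returns a rest.Request; Stream opens the filtered log stream.
        rc, err := cs.CoreV1().Pods("kubectl-1862").GetLogs("logs-generator", opts).Stream()
        if err != nil {
            panic(err)
        }
        defer rc.Close()
        io.Copy(os.Stdout, rc)
    }
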
• [SLOW TEST:16.752 seconds] [sig-cli] Kubectl client /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":16,"skipped":168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:58:07.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Nov 12 09:58:07.493: INFO: Creating ReplicaSet my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396 Nov 12 09:58:07.496: INFO: Pod name my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396: Found 0 pods out of 1 Nov 12 09:58:12.500: INFO: Pod name my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396: Found 1 pods out of 1 Nov 12 09:58:12.500: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396" is running Nov 12 09:58:18.509: INFO: Pod "my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396-dr97j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 09:58:07 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 09:58:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 09:58:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 09:58:07 +0000 UTC Reason: Message:}]) Nov 12 09:58:18.509: INFO: Trying to dial the pod Nov 12 09:58:23.519: INFO: Controller my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396: Got expected result from replica 1 [my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396-dr97j]: "my-hostname-basic-03378573-4122-4da5-adc4-c8d03c966396-dr97j", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet 
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:58:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7500" for this suite. • [SLOW TEST:16.045 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":17,"skipped":197,"failed":0} SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:58:23.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3778 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3778 STEP: Deleting pre-stop pod Nov 12 09:58:54.570: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:58:54.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3778" for this suite. 
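
The PreStop test deletes a tester pod and asserts the server saw the hook fire before the container was killed. The sketch below shows the shape of such a fixture, not the suite's own: a pod whose preStop hook notifies a peer on deletion. The image, URL, and names are hypothetical; in the v1.17 core API, lifecycle handlers use the corev1.Handler type and Create takes only the object (no context yet).

    // prestop.go: a pod with a preStop hook that pings a peer before SIGTERM.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox:1.31", // hypothetical image
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &corev1.Lifecycle{
                        // Runs inside the container before SIGTERM is sent;
                        // the e2e test's hook POSTs to its server pod instead.
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "wget -qO- http://server:8080/prestop || true"},
                            },
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
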
• [SLOW TEST:31.054 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":18,"skipped":203,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:58:54.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 12 09:58:55.068: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 12 09:58:57.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:58:59.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, 
loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:01.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:03.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:05.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771935, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 12 09:59:08.083: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] 
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:59:08.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3419" for this suite. STEP: Destroying namespace "webhook-3419-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.568 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":19,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:59:08.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 12 09:59:09.004: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 12 09:59:11.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:13.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:15.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:17.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 12 09:59:19.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740771949, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 12 09:59:22.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 09:59:22.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6700" for this suite. STEP: Destroying namespace "webhook-6700-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.968 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":20,"skipped":254,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 09:59:22.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:00:22.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1937" for this suite. 
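
The probe test runs for the full 60 seconds because it must observe that the pod stays unready the whole time. Readiness failures, unlike liveness failures, never restart the container; the pod simply stays out of service endpoints. Below is a minimal sketch of that kind of fixture, not the suite's own: a pod whose readiness probe always fails. Names and image are hypothetical; in the v1.17 API a Probe embeds a corev1.Handler.

    // neverready.go: a pod that runs but is never Ready and never restarts.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "busybox:1.31", // hypothetical image
                    Command: []string{"sh", "-c", "sleep 3600"},
                    ReadinessProbe: &corev1.Probe{
                        // /bin/false always exits nonzero, so the probe
                        // fails on every period and the pod stays unready.
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
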
• [SLOW TEST:60.025 seconds] [k8s.io] Probing container /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":267,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 10:00:22.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 12 10:00:22.163: INFO: Waiting up to 5m0s for pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089" in namespace "emptydir-1458" to be "success or failure" Nov 12 10:00:22.164: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089": Phase="Pending", Reason="", readiness=false. Elapsed: 1.649324ms Nov 12 10:00:24.167: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003874931s Nov 12 10:00:26.169: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006124983s Nov 12 10:00:28.172: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00883795s Nov 12 10:00:30.174: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011196682s Nov 12 10:00:32.176: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.012993061s STEP: Saw pod success Nov 12 10:00:32.176: INFO: Pod "pod-1694cca6-6d71-4184-8142-dd7adb21b089" satisfied condition "success or failure" Nov 12 10:00:32.178: INFO: Trying to get logs from node node1 pod pod-1694cca6-6d71-4184-8142-dd7adb21b089 container test-container: STEP: delete the pod Nov 12 10:00:32.192: INFO: Waiting for pod pod-1694cca6-6d71-4184-8142-dd7adb21b089 to disappear Nov 12 10:00:32.194: INFO: Pod pod-1694cca6-6d71-4184-8142-dd7adb21b089 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:00:32.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1458" for this suite. 
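
The "(root,0777,tmpfs)" label decodes as: run as root, expect 0777 file permissions, and back the emptyDir with memory (tmpfs). A minimal sketch of such a pod follows; the suite's real fixture uses the mounttest image, so this busybox variant that writes a file and prints the mount is a hypothetical stand-in.

    // emptydir0777.go: a memory-backed emptyDir exercised as root with 0777.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" backs the volume with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox:1.31", // hypothetical image
                    Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && ls -l /mnt/f && mount | grep /mnt"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
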
• [SLOW TEST:10.056 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":279,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 10:00:32.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Nov 12 10:00:32.214: INFO: Waiting up to 5m0s for pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df" in namespace "downward-api-3613" to be "success or failure" Nov 12 10:00:32.216: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Pending", Reason="", readiness=false. Elapsed: 1.687682ms Nov 12 10:00:34.219: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004303979s Nov 12 10:00:36.221: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006876753s Nov 12 10:00:38.224: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009279687s Nov 12 10:00:40.226: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011707625s Nov 12 10:00:42.229: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014381144s Nov 12 10:00:44.232: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.01740707s STEP: Saw pod success Nov 12 10:00:44.232: INFO: Pod "downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df" satisfied condition "success or failure" Nov 12 10:00:44.234: INFO: Trying to get logs from node node4 pod downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df container dapi-container: STEP: delete the pod Nov 12 10:00:44.254: INFO: Waiting for pod downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df to disappear Nov 12 10:00:44.256: INFO: Pod downward-api-567d06fc-deb4-472b-a59b-6e4c8248a5df no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:00:44.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3613" for this suite. • [SLOW TEST:12.062 seconds] [sig-node] Downward API /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 10:00:44.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:01:00.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5432" for this suite. 
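
The terminating-scopes test hinges on two quota objects: one scoped to Terminating pods (those with spec.activeDeadlineSeconds set) and one scoped to NotTerminating pods, so only the matching quota's usage moves when each pod is created. A minimal sketch of those two objects follows; the names and the hard limit are hypothetical.

    // quotascopes.go: paired ResourceQuotas with Terminating/NotTerminating scopes.
    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func quota(name string, scope corev1.ResourceQuotaScope) *corev1.ResourceQuota {
        return &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ResourceQuotaSpec{
                Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
                Scopes: []corev1.ResourceQuotaScope{scope},
            },
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        q := cs.CoreV1().ResourceQuotas("default")
        // Counts pods that have spec.activeDeadlineSeconds ("terminating" pods).
        if _, err := q.Create(quota("quota-terminating", corev1.ResourceQuotaScopeTerminating)); err != nil {
            panic(err)
        }
        // Counts long-running pods with no active deadline.
        if _, err := q.Create(quota("quota-not-terminating", corev1.ResourceQuotaScopeNotTerminating)); err != nil {
            panic(err)
        }
    }
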
• [SLOW TEST:16.067 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":24,"skipped":321,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:01:00.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:01:00.750: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Nov 12 10:01:02.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:01:04.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:01:06.760: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:01:08.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772060, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:01:11.764: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:01:11.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:01:12.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5593" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:12.059 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":25,"skipped":337,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:01:12.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Nov 12 10:01:29.422: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:01:30.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7135" for this suite.
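The adopt/release sequence in this block can be driven by hand with client-go. A minimal sketch under the same kubeconfig assumption as above; names, namespace, and image are illustrative:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, labels := "default", map[string]string{"name": "pod-adoption-release"}

	// 1. A bare pod carrying the label, with no owner reference.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "c", Image: "k8s.gcr.io/pause:3.1", // illustrative image
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. A ReplicaSet whose selector matches: the controller adopts the
	//    orphan instead of creating a fresh replica.
	one := int32(1)
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "adopter"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets(ns).Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 3. Changing the label so it no longer matches releases the pod: the
	//    controller clears the owner reference and spawns a replacement.
	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), pod.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}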
• [SLOW TEST:18.047 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":26,"skipped":368,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:01:30.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-6054
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6054 to expose endpoints map[]
Nov 12 10:01:30.459: INFO: successfully validated that service endpoint-test2 in namespace services-6054 exposes endpoints map[] (1.553367ms elapsed)
STEP: Creating pod pod1 in namespace services-6054
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6054 to expose endpoints map[pod1:[80]]
Nov 12 10:01:34.484: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.020753434s elapsed, will retry)
Nov 12 10:01:39.504: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.041302268s elapsed, will retry)
Nov 12 10:01:40.509: INFO: successfully validated that service endpoint-test2 in namespace services-6054 exposes endpoints map[pod1:[80]] (10.046055145s elapsed)
STEP: Creating pod pod2 in namespace services-6054
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6054 to expose endpoints map[pod1:[80] pod2:[80]]
Nov 12 10:01:44.542: INFO: Unexpected endpoints: found map[ed508d55-ce62-43f0-9852-d154b921f914:[80]], expected map[pod1:[80] pod2:[80]] (4.03000306s elapsed, will retry)
Nov 12 10:01:49.572: INFO: Unexpected endpoints: found map[ed508d55-ce62-43f0-9852-d154b921f914:[80]], expected map[pod1:[80] pod2:[80]] (9.060050849s elapsed, will retry)
Nov 12 10:01:50.578: INFO: successfully validated that service endpoint-test2 in namespace services-6054 exposes endpoints map[pod1:[80] pod2:[80]] (10.06612431s elapsed)
STEP: Deleting pod pod1 in namespace services-6054
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6054 to expose endpoints map[pod2:[80]]
Nov 12 10:01:50.584: INFO: successfully validated that service endpoint-test2 in namespace services-6054 exposes endpoints map[pod2:[80]] (3.029086ms elapsed)
STEP: Deleting pod pod2 in namespace services-6054
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6054 to expose endpoints map[]
Nov 12 10:01:51.591: INFO: successfully validated that service endpoint-test2 in namespace services-6054 exposes endpoints map[] (1.003574637s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:01:51.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6054" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:21.167 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":27,"skipped":384,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:01:51.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-ad907757-46e6-4063-9859-1df9b66c94d9 in namespace container-probe-175
Nov 12 10:02:03.626: INFO: Started pod busybox-ad907757-46e6-4063-9859-1df9b66c94d9 in namespace container-probe-175
STEP: checking the pod's current state and verifying that restartCount is present
Nov 12 10:02:03.628: INFO: Initial restart count of pod busybox-ad907757-46e6-4063-9859-1df9b66c94d9 is 0
Nov 12 10:02:47.689: INFO: Restart count of pod container-probe-175/busybox-ad907757-46e6-4063-9859-1df9b66c94d9 is now 1 (44.060414959s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:02:47.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-175" for this suite.
• [SLOW TEST:56.095 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":417,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:02:47.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 12 10:02:47.717: INFO: Waiting up to 5m0s for pod "pod-6a658b30-f815-4cb0-9006-cc146554e028" in namespace "emptydir-924" to be "success or failure"
Nov 12 10:02:47.719: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028": Phase="Pending", Reason="", readiness=false. Elapsed: 1.456789ms
Nov 12 10:02:49.721: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004113222s
Nov 12 10:02:51.724: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006420402s
Nov 12 10:02:53.728: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010507891s
Nov 12 10:02:55.731: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013315728s
Nov 12 10:02:57.733: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015618015s
STEP: Saw pod success
Nov 12 10:02:57.733: INFO: Pod "pod-6a658b30-f815-4cb0-9006-cc146554e028" satisfied condition "success or failure"
Nov 12 10:02:57.735: INFO: Trying to get logs from node node3 pod pod-6a658b30-f815-4cb0-9006-cc146554e028 container test-container: 
STEP: delete the pod
Nov 12 10:02:57.750: INFO: Waiting for pod pod-6a658b30-f815-4cb0-9006-cc146554e028 to disappear
Nov 12 10:02:57.752: INFO: Pod pod-6a658b30-f815-4cb0-9006-cc146554e028 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:02:57.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-924" for this suite.
• [SLOW TEST:10.058 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":429,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:02:57.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:02:57.775: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53" in namespace "security-context-test-7015" to be "success or failure"
Nov 12 10:02:57.776: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Pending", Reason="", readiness=false. Elapsed: 1.512068ms
Nov 12 10:02:59.779: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003939951s
Nov 12 10:03:01.782: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006869454s
Nov 12 10:03:03.784: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009191134s
Nov 12 10:03:05.787: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011869441s
Nov 12 10:03:07.790: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015231164s
Nov 12 10:03:09.793: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017770611s
Nov 12 10:03:09.793: INFO: Pod "busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:03:09.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7015" for this suite.
• [SLOW TEST:12.041 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with readOnlyRootFilesystem
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":433,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:03:09.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Nov 12 10:03:09.815: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 12 10:03:09.823: INFO: Waiting for terminating namespaces to be deleted...
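Before the scheduler test's node inventory below, it is worth unpacking the container-probe block above: the recorded restart is driven by an exec liveness probe racing a file that the container deletes. A minimal sketch of an equivalent pod, assuming a recent client-go (where the probe's handler field is named ProbeHandler; in client-go before the v1.24 API rename it is Handler); image, name, and timings are illustrative and differ slightly from the suite's own busybox command:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// /tmp/health exists for the first 10s, then disappears, so the
				// probe starts failing and the kubelet restarts the container.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}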
Nov 12 10:03:09.825: INFO: 
Logging pods the kubelet thinks is on node node1 before test
Nov 12 10:03:09.838: INFO: kube-proxy-m6bqr from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 10:03:09.838: INFO: nginx-proxy-node1 from kube-system started at 2020-11-12 09:44:33 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:03:09.838: INFO: kube-flannel-z5kqm from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:09.838: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 12 10:03:09.838: INFO: nodelocaldns-kpvsh from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:03:09.838: INFO: rally-7fb05275-zxqaxrdt-6b478cdcd8-vb7cv from c-rally-7fb05275-sq76uzgv started at 2020-11-12 10:02:22 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container rally-7fb05275-zxqaxrdt ready: false, restart count 0
Nov 12 10:03:09.838: INFO: kube-multus-ds-amd64-k4qcb from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:09.838: INFO: tiller-deploy-58f6ff6c77-zrmnw from kube-system started at 2020-11-12 09:47:10 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container tiller ready: true, restart count 1
Nov 12 10:03:09.838: INFO: registry-proxy-txrdh from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:09.838: INFO: busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53 from security-context-test-7015 started at 2020-11-12 10:02:57 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.838: INFO: 	Container busybox-readonly-false-e5e7635e-10cf-402d-8ea3-2e71186e6d53 ready: false, restart count 0
Nov 12 10:03:09.838: INFO: 
Logging pods the kubelet thinks is on node node2 before test
Nov 12 10:03:09.850: INFO: registry-proxy-lsxh9 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.850: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:09.850: INFO: nginx-proxy-node2 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.850: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:03:09.850: INFO: kube-proxy-bbzk5 from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.850: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 10:03:09.850: INFO: kube-flannel-gsk24 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:09.850: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:09.850: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 10:03:09.850: INFO: kube-multus-ds-amd64-8cjwp from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.850: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:09.850: INFO: nodelocaldns-ss57m from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.850: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:03:09.850: INFO: 
Logging pods the kubelet thinks is on node node3 before test
Nov 12 10:03:09.856: INFO: nginx-proxy-node3 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 12 10:03:09.856: INFO: kube-multus-ds-amd64-vwl4k from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:09.856: INFO: nodelocaldns-jw5xn from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container node-cache ready: true, restart count 2
Nov 12 10:03:09.856: INFO: registry-proxy-njmcx from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:09.856: INFO: kube-proxy-4b76p from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 10:03:09.856: INFO: kube-flannel-r9726 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:09.856: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 12 10:03:09.856: INFO: registry-9pgcj from kube-system started at 2020-11-12 09:47:38 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.856: INFO: 	Container registry ready: true, restart count 1
Nov 12 10:03:09.856: INFO: 
Logging pods the kubelet thinks is on node node4 before test
Nov 12 10:03:09.869: INFO: kube-proxy-qsp5l from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 10:03:09.869: INFO: kube-flannel-jbkp2 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:09.869: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 10:03:09.869: INFO: kube-multus-ds-amd64-44jqf from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:09.869: INFO: coredns-58687784f9-c4bt6 from kube-system started at 2020-11-12 09:46:39 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container coredns ready: true, restart count 1
Nov 12 10:03:09.869: INFO: nginx-proxy-node4 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:03:09.869: INFO: registry-proxy-zvv86 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:09.869: INFO: nodelocaldns-4cm4z from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:09.869: INFO: 	Container node-cache ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1646ba83342688a6], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:03:10.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6528" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":31,"skipped":443,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:03:10.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Nov 12 10:03:10.912: INFO: Waiting up to 5m0s for pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc" in namespace "emptydir-8706" to be "success or failure"
Nov 12 10:03:10.914: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc": Phase="Pending", Reason="", readiness=false. Elapsed: 1.645809ms
Nov 12 10:03:12.917: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004408287s
Nov 12 10:03:14.919: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006959787s
Nov 12 10:03:16.922: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009702509s
Nov 12 10:03:18.924: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012141978s
Nov 12 10:03:20.927: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014975876s
STEP: Saw pod success
Nov 12 10:03:20.927: INFO: Pod "pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc" satisfied condition "success or failure"
Nov 12 10:03:20.929: INFO: Trying to get logs from node node4 pod pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc container test-container: 
STEP: delete the pod
Nov 12 10:03:20.939: INFO: Waiting for pod pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc to disappear
Nov 12 10:03:20.940: INFO: Pod pod-b24e0d02-f6cd-4d8a-8198-f7b2552979cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:03:20.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8706" for this suite.
• [SLOW TEST:10.053 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":445,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:03:20.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:03:20.962: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:03:21.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9724" for this suite.
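The two tmpfs EmptyDir cases exercised just above (0644 earlier and 0666 here) both reduce to a pod of the following shape. A minimal sketch; the names, image, and shell command are illustrative, since the suite actually uses its own mounttest helper image:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the emptyDir with tmpfs rather than node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with the mode under test and read it back,
				// roughly what the conformance helper binary checks.
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}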
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":33,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 10:03:21.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:03:21.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2791" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":34,"skipped":499,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 10:03:22.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Nov 12 10:03:22.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0" in namespace "projected-8658" to be "success or failure" Nov 12 10:03:22.018: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.678317ms Nov 12 10:03:24.021: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004213023s Nov 12 10:03:26.023: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00644222s Nov 12 10:03:28.026: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009355105s Nov 12 10:03:30.029: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012267011s Nov 12 10:03:32.031: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014626583s STEP: Saw pod success Nov 12 10:03:32.031: INFO: Pod "downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0" satisfied condition "success or failure" Nov 12 10:03:32.033: INFO: Trying to get logs from node node1 pod downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0 container client-container: STEP: delete the pod Nov 12 10:03:32.043: INFO: Waiting for pod downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0 to disappear Nov 12 10:03:32.044: INFO: Pod downwardapi-volume-af3f5b35-edee-4753-83de-b3f95c1af8e0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:03:32.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8658" for this suite. • [SLOW TEST:10.049 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":499,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 12 10:03:32.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin 
Nov 12 10:03:32.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9" in namespace "downward-api-2346" to be "success or failure" Nov 12 10:03:32.070: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.605739ms Nov 12 10:03:34.072: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004163227s Nov 12 10:03:36.075: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006879171s Nov 12 10:03:38.077: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009119663s Nov 12 10:03:40.080: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011669904s Nov 12 10:03:42.083: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014628032s Nov 12 10:03:44.085: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.017182887s Nov 12 10:03:46.088: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.019757485s STEP: Saw pod success Nov 12 10:03:46.088: INFO: Pod "downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9" satisfied condition "success or failure" Nov 12 10:03:46.090: INFO: Trying to get logs from node node1 pod downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9 container client-container: STEP: delete the pod Nov 12 10:03:46.100: INFO: Waiting for pod downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9 to disappear Nov 12 10:03:46.102: INFO: Pod downwardapi-volume-0891bf99-3eb0-436b-bba2-fbaf48f860d9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Nov 12 10:03:46.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2346" for this suite. 
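The downward API volume tests above (memory request, then cpu request) project a container's own resource requests into files via resourceFieldRef. A minimal sketch of an equivalent pod, with illustrative names and image; the suite reads the file back with its mounttest helper rather than cat:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							// resourceFieldRef projects the container's resource
							// request into a file, which the test then reads back.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}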
• [SLOW TEST:14.057 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":504,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:03:46.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Nov 12 10:03:46.122: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 12 10:03:46.131: INFO: Waiting for terminating namespaces to be deleted...
Nov 12 10:03:46.133: INFO: 
Logging pods the kubelet thinks is on node node1 before test
Nov 12 10:03:46.139: INFO: kube-proxy-m6bqr from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 10:03:46.139: INFO: nginx-proxy-node1 from kube-system started at 2020-11-12 09:44:33 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:03:46.139: INFO: kube-flannel-z5kqm from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:46.139: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 12 10:03:46.139: INFO: nodelocaldns-kpvsh from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:03:46.139: INFO: kube-multus-ds-amd64-k4qcb from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:46.139: INFO: tiller-deploy-58f6ff6c77-zrmnw from kube-system started at 2020-11-12 09:47:10 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container tiller ready: true, restart count 1
Nov 12 10:03:46.139: INFO: registry-proxy-txrdh from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:46.139: INFO: rally-b8044704-vx1ol8f1-1 from c-rally-b8044704-y2h84mem started at 2020-11-12 10:03:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.139: INFO: 	Container rally-b8044704-vx1ol8f1 ready: true, restart count 0
Nov 12 10:03:46.139: INFO: 
Logging pods the kubelet thinks is on node node2 before test
Nov 12 10:03:46.145: INFO: nginx-proxy-node2 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.145: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:03:46.145: INFO: kube-proxy-bbzk5 from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.145: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 10:03:46.145: INFO: kube-flannel-gsk24 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:46.145: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:46.145: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 10:03:46.145: INFO: kube-multus-ds-amd64-8cjwp from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.145: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:46.145: INFO: nodelocaldns-ss57m from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.145: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:03:46.145: INFO: registry-proxy-lsxh9 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.145: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:46.145: INFO: 
Logging pods the kubelet thinks is on node node3 before test
Nov 12 10:03:46.151: INFO: nginx-proxy-node3 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 12 10:03:46.151: INFO: kube-multus-ds-amd64-vwl4k from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:46.151: INFO: nodelocaldns-jw5xn from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container node-cache ready: true, restart count 2
Nov 12 10:03:46.151: INFO: registry-proxy-njmcx from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:46.151: INFO: rally-b8044704-vx1ol8f1-0 from c-rally-b8044704-y2h84mem started at 2020-11-12 10:03:22 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container rally-b8044704-vx1ol8f1 ready: true, restart count 0
Nov 12 10:03:46.151: INFO: kube-proxy-4b76p from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 10:03:46.151: INFO: kube-flannel-r9726 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:46.151: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 12 10:03:46.151: INFO: registry-9pgcj from kube-system started at 2020-11-12 09:47:38 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.151: INFO: 	Container registry ready: true, restart count 1
Nov 12 10:03:46.151: INFO: 
Logging pods the kubelet thinks is on node node4 before test
Nov 12 10:03:46.157: INFO: nodelocaldns-4cm4z from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:03:46.157: INFO: registry-proxy-zvv86 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:03:46.157: INFO: kube-flannel-jbkp2 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:03:46.157: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 10:03:46.157: INFO: kube-multus-ds-amd64-44jqf from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:03:46.157: INFO: coredns-58687784f9-c4bt6 from kube-system started at 2020-11-12 09:46:39 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container coredns ready: true, restart count 1
Nov 12 10:03:46.157: INFO: nginx-proxy-node4 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:03:46.157: INFO: kube-proxy-qsp5l from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:03:46.157: INFO: 	Container kube-proxy ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c7ae06c1-e34a-4b63-bbf2-10732ce48fd2 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-c7ae06c1-e34a-4b63-bbf2-10732ce48fd2 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c7ae06c1-e34a-4b63-bbf2-10732ce48fd2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:09:10.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5351" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:324.098 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":37,"skipped":523,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:09:10.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:09:10.237: INFO: (0) /api/v1/nodes/node4/proxy/logs/:
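Before the node inventory this scheduler test logs below, it may help to see what the hostPort conflict it validates reduces to: two pod specs identical except for hostIP. A minimal sketch, with illustrative image and namespace; the real test pins both pods to the chosen node via a random node label rather than NodeName:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mk := func(name, hostIP string) *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				NodeName: "node2", // illustrative pinning; the test uses a node label
				Containers: []corev1.Container{{
					Name:  name,
					Image: "k8s.gcr.io/pause:3.1", // illustrative
					Ports: []corev1.ContainerPort{{
						ContainerPort: 8080,
						HostPort:      54322,
						HostIP:        hostIP, // empty string means 0.0.0.0
						Protocol:      corev1.ProtocolTCP,
					}},
				}},
			},
		}
	}
	// pod4 takes 0.0.0.0:54322 and schedules; pod5 then asks for
	// 127.0.0.1:54322 on the same node and must stay Pending, because
	// the 0.0.0.0 binding already covers every host address.
	for _, p := range []*corev1.Pod{mk("pod4", ""), mk("pod5", "127.0.0.1")} {
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}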
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Nov 12 10:09:10.312: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 12 10:09:10.321: INFO: Waiting for terminating namespaces to be deleted...
Nov 12 10:09:10.322: INFO: 
Logging pods the kubelet thinks is on node node1 before test
Nov 12 10:09:10.335: INFO: kube-proxy-m6bqr from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 10:09:10.335: INFO: nginx-proxy-node1 from kube-system started at 2020-11-12 09:44:33 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:09:10.335: INFO: kube-flannel-z5kqm from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:09:10.335: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 12 10:09:10.335: INFO: nodelocaldns-kpvsh from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:09:10.335: INFO: kube-multus-ds-amd64-k4qcb from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:09:10.335: INFO: tiller-deploy-58f6ff6c77-zrmnw from kube-system started at 2020-11-12 09:47:10 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container tiller ready: true, restart count 1
Nov 12 10:09:10.335: INFO: registry-proxy-txrdh from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.335: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:09:10.335: INFO: 
Logging pods the kubelet thinks is on node node2 before test
Nov 12 10:09:10.348: INFO: registry-proxy-lsxh9 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:09:10.348: INFO: pod4 from sched-pred-5351 started at 2020-11-12 10:03:58 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container pod4 ready: true, restart count 0
Nov 12 10:09:10.348: INFO: nginx-proxy-node2 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 10:09:10.348: INFO: kube-proxy-bbzk5 from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 10:09:10.348: INFO: kube-flannel-gsk24 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:09:10.348: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 10:09:10.348: INFO: kube-multus-ds-amd64-8cjwp from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:09:10.348: INFO: nodelocaldns-ss57m from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.348: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:09:10.348: INFO: 
Logging pods the kubelet thinks is on node node3 before test
Nov 12 10:09:10.361: INFO: nginx-proxy-node3 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 12 10:09:10.361: INFO: kube-multus-ds-amd64-vwl4k from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:09:10.361: INFO: nodelocaldns-jw5xn from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container node-cache ready: true, restart count 2
Nov 12 10:09:10.361: INFO: registry-proxy-njmcx from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:09:10.361: INFO: kube-proxy-4b76p from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 10:09:10.361: INFO: kube-flannel-r9726 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:09:10.361: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 12 10:09:10.361: INFO: registry-9pgcj from kube-system started at 2020-11-12 09:47:38 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.361: INFO: 	Container registry ready: true, restart count 1
Nov 12 10:09:10.361: INFO: 
Logging pods the kubelet thinks is on node node4 before test
Nov 12 10:09:10.369: INFO: registry-proxy-zvv86 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 10:09:10.369: INFO: nodelocaldns-4cm4z from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 10:09:10.369: INFO: kube-proxy-qsp5l from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 10:09:10.369: INFO: kube-flannel-jbkp2 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 10:09:10.369: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 10:09:10.369: INFO: kube-multus-ds-amd64-44jqf from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 10:09:10.369: INFO: coredns-58687784f9-c4bt6 from kube-system started at 2020-11-12 09:46:39 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container coredns ready: true, restart count 1
Nov 12 10:09:10.369: INFO: nginx-proxy-node4 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container statuses recorded)
Nov 12 10:09:10.369: INFO: 	Container nginx-proxy ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-cdcf9266-bbc1-4d35-bbb3-37eaaef7af2a 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-cdcf9266-bbc1-4d35-bbb3-37eaaef7af2a off the node node4
STEP: verifying the node doesn't have the label kubernetes.io/e2e-cdcf9266-bbc1-4d35-bbb3-37eaaef7af2a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:09:54.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8598" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:44.126 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":39,"skipped":537,"failed":0}
SSSSSSSSSS
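
Taken together, the two SchedulerPredicates runs above pin down the HostPort predicate: two pods on the same node collide only when hostPort and protocol both match and the host IPs overlap, with "" and 0.0.0.0 acting as wildcards. A minimal Go sketch of that rule, illustrating the semantics rather than quoting the kube-scheduler source:

package main

import "fmt"

// hostPortsConflict models the HostPort predicate these tests exercise:
// a collision requires equal port AND equal protocol AND overlapping host
// IPs ("" and "0.0.0.0" overlap every address).
func hostPortsConflict(ipA string, portA int32, protoA, ipB string, portB int32, protoB string) bool {
	if portA != portB || protoA != protoB {
		return false
	}
	wildcard := func(ip string) bool { return ip == "" || ip == "0.0.0.0" }
	return wildcard(ipA) || wildcard(ipB) || ipA == ipB
}

func main() {
	// pod4 (0.0.0.0:54322/TCP) vs pod5 (127.0.0.1:54322/TCP): conflict, pod5 stays unscheduled.
	fmt.Println(hostPortsConflict("0.0.0.0", 54322, "TCP", "127.0.0.1", 54322, "TCP")) // true
	// pod1 (127.0.0.1:54321/TCP) vs pod2 (127.0.0.2:54321/TCP): different IPs, no conflict.
	fmt.Println(hostPortsConflict("127.0.0.1", 54321, "TCP", "127.0.0.2", 54321, "TCP")) // false
	// pod3 reuses 127.0.0.2:54321 but over UDP: different protocol, no conflict.
	fmt.Println(hostPortsConflict("127.0.0.2", 54321, "TCP", "127.0.0.2", 54321, "UDP")) // false
}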
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:09:54.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Nov 12 10:10:14.469: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:14.471: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:16.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:16.474: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:18.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:18.474: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:20.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:20.474: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:22.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:22.474: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:24.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:24.473: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:26.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:26.474: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:28.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:28.474: INFO: Pod pod-with-poststart-http-hook still exists
Nov 12 10:10:30.471: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 12 10:10:30.473: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:10:30.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6400" for this suite.

• [SLOW TEST:36.055 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":547,"failed":0}
SSSSSS
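
The pod under test wires the hook through the container's lifecycle stanza: after the container starts, the kubelet issues an HTTP GET against the handler pod created in BeforeEach. A sketch of that shape against the v1.17-era k8s.io/api (where the hook type is corev1.Handler; newer releases rename it LifecycleHandler); the image and target address are placeholders, not values from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					// The kubelet fires this GET once the container has started;
					// the test then checks the handler pod saw the request.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.233.64.10", // placeholder: IP of the hook-handler pod
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}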
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:10:30.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 12 10:10:30.500: INFO: Waiting up to 5m0s for pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab" in namespace "emptydir-1798" to be "success or failure"
Nov 12 10:10:30.502: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148479ms
Nov 12 10:10:32.505: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004951107s
Nov 12 10:10:34.508: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008021076s
Nov 12 10:10:36.510: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010432977s
Nov 12 10:10:38.514: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013783785s
Nov 12 10:10:40.516: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016441722s
STEP: Saw pod success
Nov 12 10:10:40.516: INFO: Pod "pod-3fce1283-56bf-48aa-8cd5-2adc441348ab" satisfied condition "success or failure"
Nov 12 10:10:40.518: INFO: Trying to get logs from node node3 pod pod-3fce1283-56bf-48aa-8cd5-2adc441348ab container test-container: 
STEP: delete the pod
Nov 12 10:10:40.529: INFO: Waiting for pod pod-3fce1283-56bf-48aa-8cd5-2adc441348ab to disappear
Nov 12 10:10:40.531: INFO: Pod pod-3fce1283-56bf-48aa-8cd5-2adc441348ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:10:40.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1798" for this suite.

• [SLOW TEST:10.058 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":553,"failed":0}
SSSSSSSS
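
The emptydir pod boils down to a volume on the node's default medium plus a container that creates and stats a 0777 file as root. A minimal sketch; the image and command are placeholders standing in for the e2e mounttest container:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example", Namespace: "emptydir-1798"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29", // placeholder
				// Placeholder for the mounttest invocation: write a file with
				// mode 0777 and report its permissions and owner.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume && id -u"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// "default medium" in the test name: node disk, not tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}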
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:10:40.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:10:41.228: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Nov 12 10:10:43.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:10:45.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:10:47.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:10:49.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:10:51.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772641, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:10:54.243: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:10:54.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:10:55.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-335" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.875 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":42,"skipped":561,"failed":0}
SSSSSSSSSSSS
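
The conversion traffic in this test flows through the CRD's spec.conversion stanza, which points at the webhook service deployed above; listing a mixed-version set of CRs forces the apiserver to round-trip each object through it. A sketch of that stanza with the apiextensions.k8s.io/v1 types; the service name and namespace come from this run, while the path and port are assumed placeholders:

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // placeholder path
	port := int32(9443)   // placeholder port
	conv := apiextv1.CustomResourceConversion{
		Strategy: apiextv1.WebhookConverter,
		Webhook: &apiextv1.WebhookConversion{
			ClientConfig: &apiextv1.WebhookClientConfig{
				// The service the log waits on ("service:e2e-test-crd-conversion-webhook").
				Service: &apiextv1.ServiceReference{
					Namespace: "crd-webhook-335",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
			},
			// Versions of ConversionReview the webhook is willing to speak.
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	out, _ := json.MarshalIndent(conv, "", "  ")
	fmt.Println(string(out))
}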
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:10:55.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:10:56.104: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:10:58.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:11:00.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:11:02.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:11:04.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:11:06.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772656, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:11:09.123: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:11:09.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3741" for this suite.
STEP: Destroying namespace "webhook-3741-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.778 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":43,"skipped":573,"failed":0}
SSSSSSSSSSS
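
The update/patch cycle above toggles the CREATE operation in the webhook's rules: with it removed, new configmaps pass through untouched; patched back in, they are mutated again. A sketch of the object being edited; the webhook and configuration names are placeholders, while the service name and namespace come from this run:

package main

import (
	"encoding/json"
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	none := admv1.SideEffectClassNone
	cfg := admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook-configuration"}, // placeholder
		Webhooks: []admv1.MutatingWebhook{{
			Name: "adding-configmap-data.k8s.io", // placeholder
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{Namespace: "webhook-3741", Name: "e2e-test-webhook"},
			},
			// The test first updates this slice to drop admv1.Create, then
			// patches it back; only the Operations list changes.
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}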
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:11:09.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 10:11:09.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba" in namespace "downward-api-5751" to be "success or failure"
Nov 12 10:11:09.213: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Pending", Reason="", readiness=false. Elapsed: 1.639302ms
Nov 12 10:11:11.216: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004236214s
Nov 12 10:11:13.222: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010279796s
Nov 12 10:11:15.224: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012711205s
Nov 12 10:11:17.227: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01506541s
Nov 12 10:11:19.229: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017534809s
Nov 12 10:11:21.232: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.020225816s
STEP: Saw pod success
Nov 12 10:11:21.232: INFO: Pod "downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba" satisfied condition "success or failure"
Nov 12 10:11:21.234: INFO: Trying to get logs from node node3 pod downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba container client-container: 
STEP: delete the pod
Nov 12 10:11:21.244: INFO: Waiting for pod downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba to disappear
Nov 12 10:11:21.246: INFO: Pod downwardapi-volume-9dedad9c-aa75-4d98-be28-39ab765e1eba no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:11:21.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5751" for this suite.

• [SLOW TEST:12.060 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":584,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
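
The downward API pod projects the container's own limits.cpu into a file that the client-container then cats. A minimal sketch; the names, image, and the 1250m limit are placeholders (with divisor 1 the projected file would read 2, since fractional CPU limits are rounded up to whole cores):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29", // placeholder
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// resourceFieldRef exposes the container's own limit.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1"),
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}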
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:11:21.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1112 10:11:31.308288      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 12 10:11:31.308: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:11:31.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-292" for this suite.

• [SLOW TEST:10.059 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":45,"skipped":611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
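
The invariant this test checks is that the garbage collector deletes a dependent only once every one of its owners is gone. Half of the pods created by simpletest-rc-to-be-deleted also list simpletest-rc-to-stay as an owner, so deleting the first RC with foreground ("waiting for dependents") propagation must leave those pods alive. A sketch of the ownership shape and delete options; the UIDs are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Owner references carried by the dual-owned pods.
	owners := []metav1.OwnerReference{
		{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-be-deleted", UID: "aaaaaaaa-0000-0000-0000-000000000000"}, // placeholder UID
		{APIVersion: "v1", Kind: "ReplicationController", Name: "simpletest-rc-to-stay", UID: "bbbbbbbb-0000-0000-0000-000000000000"},       // placeholder UID
	}
	// Foreground deletion of the first RC: the RC waits for its dependents,
	// but the GC must skip pods that still have a surviving owner.
	foreground := metav1.DeletePropagationForeground
	delOpts := metav1.DeleteOptions{PropagationPolicy: &foreground}

	out, _ := json.MarshalIndent(struct {
		OwnerReferences []metav1.OwnerReference `json:"ownerReferences"`
		DeleteOptions   metav1.DeleteOptions    `json:"deleteOptions"`
	}{owners, delOpts}, "", "  ")
	fmt.Println(string(out))
}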
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:11:31.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3883
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3883
STEP: creating replication controller externalsvc in namespace services-3883
I1112 10:11:31.339520      10 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3883, replica count: 2
I1112 10:11:34.391230      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:11:37.391504      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:11:40.391760      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:11:43.392060      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Nov 12 10:11:43.402: INFO: Creating new exec pod
Nov 12 10:11:53.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3883 execpodh9ckc -- /bin/sh -x -c nslookup nodeport-service'
Nov 12 10:11:53.785: INFO: stderr: "+ nslookup nodeport-service\n"
Nov 12 10:11:53.785: INFO: stdout: "Server:\t\t169.254.25.10\nAddress:\t169.254.25.10#53\n\nnodeport-service.services-3883.svc.cluster.local\tcanonical name = externalsvc.services-3883.svc.cluster.local.\nName:\texternalsvc.services-3883.svc.cluster.local\nAddress: 10.233.6.55\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3883, will wait for the garbage collector to delete the pods
Nov 12 10:11:53.841: INFO: Deleting ReplicationController externalsvc took: 3.597654ms
Nov 12 10:11:53.941: INFO: Terminating ReplicationController externalsvc pods took: 100.242902ms
Nov 12 10:11:59.048: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:11:59.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3883" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:27.746 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":46,"skipped":639,"failed":0}
SSSSSS
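
After the flip, nodeport-service stops proxying entirely and becomes a DNS alias, which is exactly what the nslookup output above shows: a CNAME to the externalsvc FQDN. A sketch of the service's end state (the names and FQDN come from this run; clusterIP, ports, and nodePort assignments must all be cleared when the type changes, so the spec carries none of them):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service", Namespace: "services-3883"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeExternalName,
			// kube-dns answers lookups for this service with a CNAME to:
			ExternalName: "externalsvc.services-3883.svc.cluster.local",
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}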
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:11:59.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-2279/secret-test-c2fcf46c-addb-4989-b8aa-5365c36a0ff2
STEP: Creating a pod to test consume secrets
Nov 12 10:11:59.081: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969" in namespace "secrets-2279" to be "success or failure"
Nov 12 10:11:59.082: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969": Phase="Pending", Reason="", readiness=false. Elapsed: 1.379646ms
Nov 12 10:12:01.085: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003898238s
Nov 12 10:12:03.088: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006701461s
Nov 12 10:12:05.090: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009150488s
Nov 12 10:12:07.093: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01160584s
Nov 12 10:12:09.095: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013900751s
STEP: Saw pod success
Nov 12 10:12:09.095: INFO: Pod "pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969" satisfied condition "success or failure"
Nov 12 10:12:09.096: INFO: Trying to get logs from node node4 pod pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969 container env-test: 
STEP: delete the pod
Nov 12 10:12:09.111: INFO: Waiting for pod pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969 to disappear
Nov 12 10:12:09.112: INFO: Pod pod-configmaps-5b23446b-b724-461a-b04c-4967445bc969 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:09.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2279" for this suite.

• [SLOW TEST:10.057 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":645,"failed":0}
SSS
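
The env-test container consumes the secret through valueFrom.secretKeyRef rather than a volume, so the value appears directly in the container's environment. A minimal sketch; the secret name comes from this run, while the key, variable name, and image are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-example", Namespace: "secrets-2279"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.29", // placeholder
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA", // placeholder variable name
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "secret-test-c2fcf46c-addb-4989-b8aa-5365c36a0ff2",
							},
							Key: "data-1", // placeholder key
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}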
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:09.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Nov 12 10:12:09.642: INFO: created pod pod-service-account-defaultsa
Nov 12 10:12:09.642: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Nov 12 10:12:09.644: INFO: created pod pod-service-account-mountsa
Nov 12 10:12:09.644: INFO: pod pod-service-account-mountsa service account token volume mount: true
Nov 12 10:12:09.646: INFO: created pod pod-service-account-nomountsa
Nov 12 10:12:09.646: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Nov 12 10:12:09.647: INFO: created pod pod-service-account-defaultsa-mountspec
Nov 12 10:12:09.647: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Nov 12 10:12:09.649: INFO: created pod pod-service-account-mountsa-mountspec
Nov 12 10:12:09.649: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Nov 12 10:12:09.651: INFO: created pod pod-service-account-nomountsa-mountspec
Nov 12 10:12:09.651: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Nov 12 10:12:09.654: INFO: created pod pod-service-account-defaultsa-nomountspec
Nov 12 10:12:09.654: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Nov 12 10:12:09.656: INFO: created pod pod-service-account-mountsa-nomountspec
Nov 12 10:12:09.656: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Nov 12 10:12:09.657: INFO: created pod pod-service-account-nomountsa-nomountspec
Nov 12 10:12:09.657: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:09.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6579" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":48,"skipped":648,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
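
The nine pods above enumerate every combination of service-account-level and pod-level automountServiceAccountToken, and their mount results all follow one rule: the pod-level field wins when set, then the service account's field, then the default of mounting. That rule as a small executable truth table, checked against the log lines above:

package main

import "fmt"

// mounts reproduces the precedence the nine pods exercise.
func mounts(saAutomount, podAutomount *bool) bool {
	if podAutomount != nil {
		return *podAutomount // spec.automountServiceAccountToken wins when set
	}
	if saAutomount != nil {
		return *saAutomount // else the ServiceAccount's setting applies
	}
	return true // else the token is mounted by default
}

func main() {
	t, f := true, false
	fmt.Println(mounts(nil, nil)) // pod-service-account-defaultsa: true
	fmt.Println(mounts(&f, nil))  // pod-service-account-nomountsa: false
	fmt.Println(mounts(&f, &t))   // pod-service-account-nomountsa-mountspec: true
	fmt.Println(mounts(&t, &f))   // pod-service-account-mountsa-nomountspec: false
}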
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:09.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-8f290500-14ba-4bbe-800b-96d8b7641843
STEP: Creating a pod to test consume configMaps
Nov 12 10:12:09.680: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05" in namespace "projected-5627" to be "success or failure"
Nov 12 10:12:09.682: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05": Phase="Pending", Reason="", readiness=false. Elapsed: 1.592582ms
Nov 12 10:12:11.684: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003761962s
Nov 12 10:12:13.687: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006873693s
Nov 12 10:12:15.690: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009434041s
Nov 12 10:12:17.692: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01159468s
Nov 12 10:12:19.695: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014288287s
STEP: Saw pod success
Nov 12 10:12:19.695: INFO: Pod "pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05" satisfied condition "success or failure"
Nov 12 10:12:19.697: INFO: Trying to get logs from node node4 pod pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05 container projected-configmap-volume-test: 
STEP: delete the pod
Nov 12 10:12:19.707: INFO: Waiting for pod pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05 to disappear
Nov 12 10:12:19.708: INFO: Pod pod-projected-configmaps-0186ad4c-a7bf-455a-a92c-73498d56fe05 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:19.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5627" for this suite.

• [SLOW TEST:10.051 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":682,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:19.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Nov 12 10:12:19.734: INFO: Waiting up to 5m0s for pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e" in namespace "var-expansion-9454" to be "success or failure"
Nov 12 10:12:19.736: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149778ms
Nov 12 10:12:21.738: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004803209s
Nov 12 10:12:23.741: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007413701s
Nov 12 10:12:25.744: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010029182s
Nov 12 10:12:27.746: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01275399s
Nov 12 10:12:29.749: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015325592s
STEP: Saw pod success
Nov 12 10:12:29.749: INFO: Pod "var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e" satisfied condition "success or failure"
Nov 12 10:12:29.751: INFO: Trying to get logs from node node4 pod var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e container dapi-container: 
STEP: delete the pod
Nov 12 10:12:29.760: INFO: Waiting for pod var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e to disappear
Nov 12 10:12:29.762: INFO: Pod var-expansion-c17546bb-61ac-4fa2-91fc-ed9ae73d589e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:29.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9454" for this suite.

• [SLOW TEST:10.054 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":695,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:29.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Nov 12 10:12:40.301: INFO: Successfully updated pod "pod-update-activedeadlineseconds-86cb2246-e4ef-4a57-8d35-7b150e550a52"
Nov 12 10:12:40.301: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-86cb2246-e4ef-4a57-8d35-7b150e550a52" in namespace "pods-1057" to be "terminated due to deadline exceeded"
Nov 12 10:12:40.302: INFO: Pod "pod-update-activedeadlineseconds-86cb2246-e4ef-4a57-8d35-7b150e550a52": Phase="Running", Reason="", readiness=true. Elapsed: 1.735542ms
Nov 12 10:12:42.305: INFO: Pod "pod-update-activedeadlineseconds-86cb2246-e4ef-4a57-8d35-7b150e550a52": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.004141923s
Nov 12 10:12:42.305: INFO: Pod "pod-update-activedeadlineseconds-86cb2246-e4ef-4a57-8d35-7b150e550a52" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:42.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1057" for this suite.

• [SLOW TEST:12.542 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:42.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-c0663b74-1bc0-47c2-8232-6978948a9f8d
STEP: Creating a pod to test consume configMaps
Nov 12 10:12:42.334: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a" in namespace "projected-4232" to be "success or failure"
Nov 12 10:12:42.335: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.550996ms
Nov 12 10:12:44.338: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004045196s
Nov 12 10:12:46.340: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006340985s
Nov 12 10:12:48.343: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008828841s
Nov 12 10:12:50.345: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011128156s
Nov 12 10:12:52.348: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01392548s
STEP: Saw pod success
Nov 12 10:12:52.348: INFO: Pod "pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a" satisfied condition "success or failure"
Nov 12 10:12:52.350: INFO: Trying to get logs from node node3 pod pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a container projected-configmap-volume-test: 
STEP: delete the pod
Nov 12 10:12:52.367: INFO: Waiting for pod pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a to disappear
Nov 12 10:12:52.368: INFO: Pod pod-projected-configmaps-0b17f815-ec14-41e5-8c46-f8b542e4b50a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:52.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4232" for this suite.

• [SLOW TEST:10.062 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":734,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:52.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:12:52.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9717" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":53,"skipped":791,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:12:52.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:13:02.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8823" for this suite.

• [SLOW TEST:10.037 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":842,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:13:02.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-c6f17978-66fe-42d9-8e8c-ec00f9b52f9c
STEP: Creating a pod to test consume configMaps
Nov 12 10:13:02.460: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80" in namespace "projected-5577" to be "success or failure"
Nov 12 10:13:02.462: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80": Phase="Pending", Reason="", readiness=false. Elapsed: 1.709572ms
Nov 12 10:13:04.465: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00436551s
Nov 12 10:13:06.469: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008171641s
Nov 12 10:13:08.474: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013488303s
Nov 12 10:13:10.476: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016079623s
Nov 12 10:13:12.479: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.018716072s
STEP: Saw pod success
Nov 12 10:13:12.479: INFO: Pod "pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80" satisfied condition "success or failure"
Nov 12 10:13:12.481: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80 container projected-configmap-volume-test: 
STEP: delete the pod
Nov 12 10:13:12.498: INFO: Waiting for pod pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80 to disappear
Nov 12 10:13:12.500: INFO: Pod pod-projected-configmaps-4f53aec6-75e7-4f88-a559-de87094d2c80 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:13:12.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5577" for this suite.

• [SLOW TEST:10.068 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":868,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:13:12.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Nov 12 10:13:12.526: INFO: Waiting up to 5m0s for pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25" in namespace "containers-2051" to be "success or failure"
Nov 12 10:13:12.527: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25": Phase="Pending", Reason="", readiness=false. Elapsed: 1.60118ms
Nov 12 10:13:14.531: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004907292s
Nov 12 10:13:16.533: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00722528s
Nov 12 10:13:18.536: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009818236s
Nov 12 10:13:20.538: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012626527s
Nov 12 10:13:22.541: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015553254s
STEP: Saw pod success
Nov 12 10:13:22.541: INFO: Pod "client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25" satisfied condition "success or failure"
Nov 12 10:13:22.543: INFO: Trying to get logs from node node4 pod client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25 container test-container: 
STEP: delete the pod
Nov 12 10:13:22.553: INFO: Waiting for pod client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25 to disappear
Nov 12 10:13:22.555: INFO: Pod client-containers-2ee2702b-fec0-4bd0-b69f-f097e1597f25 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:13:22.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2051" for this suite.

• [SLOW TEST:10.055 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":875,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:13:22.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:13:23.073: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:13:25.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:13:27.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:13:29.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:13:31.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:13:33.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740772803, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:13:36.087: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:13:36.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4362-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:13:37.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8529" for this suite.
STEP: Destroying namespace "webhook-8529-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.634 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":57,"skipped":881,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:13:37.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Nov 12 10:13:37.402: INFO: Pod name wrapped-volume-race-b29bee6a-6c69-4645-b5f4-bada35174a48: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b29bee6a-6c69-4645-b5f4-bada35174a48 in namespace emptydir-wrapper-4883, will wait for the garbage collector to delete the pods
Nov 12 10:14:11.518: INFO: Deleting ReplicationController wrapped-volume-race-b29bee6a-6c69-4645-b5f4-bada35174a48 took: 4.394339ms
Nov 12 10:14:11.818: INFO: Terminating ReplicationController wrapped-volume-race-b29bee6a-6c69-4645-b5f4-bada35174a48 pods took: 300.257623ms
STEP: Creating RC which spawns configmap-volume pods
Nov 12 10:14:22.528: INFO: Pod name wrapped-volume-race-71112370-d003-47ec-a221-af5f2ac51c3a: Found 0 pods out of 5
Nov 12 10:14:27.533: INFO: Pod name wrapped-volume-race-71112370-d003-47ec-a221-af5f2ac51c3a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-71112370-d003-47ec-a221-af5f2ac51c3a in namespace emptydir-wrapper-4883, will wait for the garbage collector to delete the pods
Nov 12 10:14:59.608: INFO: Deleting ReplicationController wrapped-volume-race-71112370-d003-47ec-a221-af5f2ac51c3a took: 3.91258ms
Nov 12 10:14:59.908: INFO: Terminating ReplicationController wrapped-volume-race-71112370-d003-47ec-a221-af5f2ac51c3a pods took: 300.20878ms
STEP: Creating RC which spawns configmap-volume pods
Nov 12 10:15:11.519: INFO: Pod name wrapped-volume-race-1ad27eb1-71cb-4e2b-ad62-b0937f9d6329: Found 0 pods out of 5
Nov 12 10:15:16.523: INFO: Pod name wrapped-volume-race-1ad27eb1-71cb-4e2b-ad62-b0937f9d6329: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1ad27eb1-71cb-4e2b-ad62-b0937f9d6329 in namespace emptydir-wrapper-4883, will wait for the garbage collector to delete the pods
Nov 12 10:15:48.602: INFO: Deleting ReplicationController wrapped-volume-race-1ad27eb1-71cb-4e2b-ad62-b0937f9d6329 took: 4.657758ms
Nov 12 10:15:48.902: INFO: Terminating ReplicationController wrapped-volume-race-1ad27eb1-71cb-4e2b-ad62-b0937f9d6329 pods took: 300.278247ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:16:09.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4883" for this suite.

• [SLOW TEST:151.850 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":58,"skipped":893,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:16:09.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 10:16:09.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3967'
Nov 12 10:16:09.207: INFO: stderr: ""
Nov 12 10:16:09.207: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Nov 12 10:16:29.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3967 -o json'
Nov 12 10:16:29.395: INFO: stderr: ""
Nov 12 10:16:29.395: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"annotations\": {\n            \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n    \\\"name\\\": \\\"default-cni-network\\\",\\n    \\\"interface\\\": \\\"eth0\\\",\\n    \\\"ips\\\": [\\n        \\\"10.244.3.23\\\"\\n    ],\\n    \\\"mac\\\": \\\"0a:58:0a:f4:03:17\\\",\\n    \\\"default\\\": true,\\n    \\\"dns\\\": {}\\n}]\"\n        },\n        \"creationTimestamp\": \"2020-11-12T10:16:09Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3967\",\n        \"resourceVersion\": \"10194\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3967/pods/e2e-test-httpd-pod\",\n        \"uid\": \"9a861857-99c1-4920-a926-b7396bcdf0de\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-rf2zz\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"node2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-rf2zz\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-rf2zz\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-12T10:16:09Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-12T10:16:24Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-12T10:16:24Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                
\"lastTransitionTime\": \"2020-11-12T10:16:09Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://b25f4c996ca920eb58f494c9dd264e8702f0c65d072aa82fd689af4f842ece8c\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-11-12T10:16:24Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.0.20.14\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.3.23\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.3.23\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-11-12T10:16:09Z\"\n    }\n}\n"
STEP: replace the image in the pod
Nov 12 10:16:29.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3967'
Nov 12 10:16:29.670: INFO: stderr: ""
Nov 12 10:16:29.670: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Nov 12 10:16:29.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3967'
Nov 12 10:16:33.149: INFO: stderr: ""
Nov 12 10:16:33.149: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:16:33.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3967" for this suite.

• [SLOW TEST:24.112 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":59,"skipped":909,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:16:33.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Nov 12 10:16:33.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Nov 12 10:16:42.991: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:16:44.917: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:16:57.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1277" for this suite.

• [SLOW TEST:24.150 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":60,"skipped":944,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:16:57.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:17:08.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2848" for this suite.

• [SLOW TEST:11.039 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":61,"skipped":980,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:17:08.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Nov 12 10:17:08.366: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:17:10.356: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:17:21.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7060" for this suite.

• [SLOW TEST:12.968 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":62,"skipped":985,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:17:21.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 10:17:21.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638" in namespace "downward-api-6981" to be "success or failure"
Nov 12 10:17:21.340: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638": Phase="Pending", Reason="", readiness=false. Elapsed: 1.552408ms
Nov 12 10:17:23.343: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004213082s
Nov 12 10:17:25.346: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007354539s
Nov 12 10:17:27.348: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009811391s
Nov 12 10:17:29.351: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012321084s
Nov 12 10:17:31.354: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01577492s
STEP: Saw pod success
Nov 12 10:17:31.354: INFO: Pod "downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638" satisfied condition "success or failure"
Nov 12 10:17:31.356: INFO: Trying to get logs from node node3 pod downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638 container client-container: 
STEP: delete the pod
Nov 12 10:17:31.374: INFO: Waiting for pod downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638 to disappear
Nov 12 10:17:31.376: INFO: Pod downwardapi-volume-090d9032-235d-435a-a3f8-78f91eaca638 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:17:31.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6981" for this suite.

• [SLOW TEST:10.063 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1002,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:17:31.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1112 10:18:01.923775      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 12 10:18:01.923: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:18:01.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8081" for this suite.

• [SLOW TEST:30.548 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":64,"skipped":1004,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:18:01.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Nov 12 10:18:01.950: INFO: Waiting up to 5m0s for pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24" in namespace "containers-6472" to be "success or failure"
Nov 12 10:18:01.952: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24": Phase="Pending", Reason="", readiness=false. Elapsed: 1.718563ms
Nov 12 10:18:03.955: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004868701s
Nov 12 10:18:05.958: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007706249s
Nov 12 10:18:07.960: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009957233s
Nov 12 10:18:09.963: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012703397s
Nov 12 10:18:11.966: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015495567s
STEP: Saw pod success
Nov 12 10:18:11.966: INFO: Pod "client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24" satisfied condition "success or failure"
Nov 12 10:18:11.968: INFO: Trying to get logs from node node4 pod client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24 container test-container: 
STEP: delete the pod
Nov 12 10:18:11.987: INFO: Waiting for pod client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24 to disappear
Nov 12 10:18:11.988: INFO: Pod client-containers-bd67f6db-e4fc-4a5b-89bb-3ab275b29a24 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:18:11.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6472" for this suite.

• [SLOW TEST:10.064 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1051,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:18:11.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-a8f20691-36d7-4423-9d92-757afaf210fc in namespace container-probe-6808
Nov 12 10:18:22.019: INFO: Started pod liveness-a8f20691-36d7-4423-9d92-757afaf210fc in namespace container-probe-6808
STEP: checking the pod's current state and verifying that restartCount is present
Nov 12 10:18:22.021: INFO: Initial restart count of pod liveness-a8f20691-36d7-4423-9d92-757afaf210fc is 0
Nov 12 10:18:42.049: INFO: Restart count of pod container-probe-6808/liveness-a8f20691-36d7-4423-9d92-757afaf210fc is now 1 (20.028330861s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:18:42.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6808" for this suite.

• [SLOW TEST:30.066 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1068,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:18:42.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:18:53.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8741" for this suite.

• [SLOW TEST:11.048 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":67,"skipped":1082,"failed":0}
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:18:53.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:18:59.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2769" for this suite.
STEP: Destroying namespace "nsdeletetest-1026" for this suite.
Nov 12 10:18:59.173: INFO: Namespace nsdeletetest-1026 was already deleted
STEP: Destroying namespace "nsdeletetest-544" for this suite.

• [SLOW TEST:6.065 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":68,"skipped":1082,"failed":0}
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:18:59.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:18:59.205: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/: 
anaconda/
audit/
boot.log
(the same three-entry listing repeats 20 times in the source, once per proxied request; the remainder of this test's output, through its PASSED line, is truncated)
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:18:59.272: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Nov 12 10:18:59.276: INFO: Pod name sample-pod: Found 0 pods out of 1
Nov 12 10:19:04.278: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 12 10:19:16.282: INFO: Creating deployment "test-rolling-update-deployment"
Nov 12 10:19:16.284: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Nov 12 10:19:16.287: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Nov 12 10:19:18.292: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Nov 12 10:19:18.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:19:20.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:19:22.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:19:24.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773156, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:19:26.296: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Nov 12 10:19:26.304: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-2556 /apis/apps/v1/namespaces/deployment-2556/deployments/test-rolling-update-deployment 08eb33e1-63e7-4dc0-8911-aff0badbf835 10947 1 2020-11-12 10:19:16 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035463e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-12 10:19:16 +0000 UTC,LastTransitionTime:2020-11-12 10:19:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-11-12 10:19:25 +0000 UTC,LastTransitionTime:2020-11-12 10:19:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Nov 12 10:19:26.306: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-2556 /apis/apps/v1/namespaces/deployment-2556/replicasets/test-rolling-update-deployment-67cf4f6444 6eb481c0-187d-40a4-ab54-f7f16f4e6b42 10937 1 2020-11-12 10:19:16 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 08eb33e1-63e7-4dc0-8911-aff0badbf835 0xc001ff0ae7 0xc001ff0ae8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001ff0b58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:19:26.306: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Nov 12 10:19:26.307: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-2556 /apis/apps/v1/namespaces/deployment-2556/replicasets/test-rolling-update-controller 0d2f893f-ecec-4141-9e91-0f784a64267b 10945 2 2020-11-12 10:18:59 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 08eb33e1-63e7-4dc0-8911-aff0badbf835 0xc001ff0977 0xc001ff0978}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001ff0a68  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:19:26.309: INFO: Pod "test-rolling-update-deployment-67cf4f6444-dbd9h" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-dbd9h test-rolling-update-deployment-67cf4f6444- deployment-2556 /api/v1/namespaces/deployment-2556/pods/test-rolling-update-deployment-67cf4f6444-dbd9h ee234c98-af72-4044-8f9c-360ba0718b36 10936 0 2020-11-12 10:19:16 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.25"
    ],
    "mac": "0a:58:0a:f4:03:19",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 6eb481c0-187d-40a4-ab54-f7f16f4e6b42 0xc001ff1397 0xc001ff1398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8v628,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8v628,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8v628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:19:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:19:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:19:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:19:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.14,PodIP:10.244.3.25,StartTime:2020-11-12 10:19:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:19:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://09ae70d3b91c615811fd8d6e787b9ab20cff1219e9878eaf5896d96e197d5ffb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:19:26.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2556" for this suite.

• [SLOW TEST:27.056 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":70,"skipped":1101,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:19:26.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-7c7c0a6a-1662-4639-a8a6-5f007d0739a7
STEP: Creating a pod to test consume configMaps
Nov 12 10:19:26.336: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f" in namespace "configmap-5970" to be "success or failure"
Nov 12 10:19:26.338: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065093ms
Nov 12 10:19:28.341: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004633359s
Nov 12 10:19:30.343: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007137301s
Nov 12 10:19:32.345: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009367483s
Nov 12 10:19:34.348: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012132093s
Nov 12 10:19:36.351: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014796419s
STEP: Saw pod success
Nov 12 10:19:36.351: INFO: Pod "pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f" satisfied condition "success or failure"
Nov 12 10:19:36.353: INFO: Trying to get logs from node node3 pod pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f container configmap-volume-test: 
STEP: delete the pod
Nov 12 10:19:36.369: INFO: Waiting for pod pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f to disappear
Nov 12 10:19:36.370: INFO: Pod pod-configmaps-d4b25d08-62c8-4730-9d01-15bf1b73260f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:19:36.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5970" for this suite.

• [SLOW TEST:10.062 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1112,"failed":0}
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:19:36.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1540
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1540
I1112 10:19:36.402260      10 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1540, replica count: 2
I1112 10:19:39.452907      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:19:42.453210      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:19:45.453403      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:19:48.453651      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 12 10:19:48.453: INFO: Creating new exec pod
Nov 12 10:19:59.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1540 execpod2g6h9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Nov 12 10:19:59.733: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 12 10:19:59.733: INFO: stdout: ""
Nov 12 10:19:59.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1540 execpod2g6h9 -- /bin/sh -x -c nc -zv -t -w 2 10.233.46.102 80'
Nov 12 10:19:59.964: INFO: stderr: "+ nc -zv -t -w 2 10.233.46.102 80\nConnection to 10.233.46.102 80 port [tcp/http] succeeded!\n"
Nov 12 10:19:59.964: INFO: stdout: ""
Nov 12 10:19:59.964: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:19:59.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1540" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.601 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":72,"skipped":1112,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:19:59.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:19:59.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Nov 12 10:20:02.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2130 create -f -'
Nov 12 10:20:02.803: INFO: stderr: ""
Nov 12 10:20:02.803: INFO: stdout: "e2e-test-crd-publish-openapi-7110-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Nov 12 10:20:02.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2130 delete e2e-test-crd-publish-openapi-7110-crds test-cr'
Nov 12 10:20:02.924: INFO: stderr: ""
Nov 12 10:20:02.924: INFO: stdout: "e2e-test-crd-publish-openapi-7110-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Nov 12 10:20:02.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2130 apply -f -'
Nov 12 10:20:03.137: INFO: stderr: ""
Nov 12 10:20:03.137: INFO: stdout: "e2e-test-crd-publish-openapi-7110-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Nov 12 10:20:03.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2130 delete e2e-test-crd-publish-openapi-7110-crds test-cr'
Nov 12 10:20:03.292: INFO: stderr: ""
Nov 12 10:20:03.292: INFO: stdout: "e2e-test-crd-publish-openapi-7110-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Nov 12 10:20:03.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7110-crds'
Nov 12 10:20:03.536: INFO: stderr: ""
Nov 12 10:20:03.536: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7110-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:20:05.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2130" for this suite.

• [SLOW TEST:6.013 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":73,"skipped":1114,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:20:05.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1112 10:20:46.028462      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 12 10:20:46.028: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:20:46.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2306" for this suite.

• [SLOW TEST:40.041 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":74,"skipped":1118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:20:46.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:20:46.388: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:20:48.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:20:50.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:20:52.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:20:54.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:20:56.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:20:58.398: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740773246, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:21:01.402: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:21:11.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9037" for this suite.
STEP: Destroying namespace "webhook-9037-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:25.471 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":75,"skipped":1158,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:21:11.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Nov 12 10:21:11.523: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:21:29.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8074" for this suite.

• [SLOW TEST:17.607 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1168,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:21:29.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Nov 12 10:21:29.130: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:21:29.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5230" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":77,"skipped":1181,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:21:29.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Nov 12 10:21:29.265: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:21:41.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5437" for this suite.

• [SLOW TEST:11.877 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":78,"skipped":1195,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:21:41.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 10:21:41.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214" in namespace "downward-api-1309" to be "success or failure"
Nov 12 10:21:41.148: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214": Phase="Pending", Reason="", readiness=false. Elapsed: 1.632532ms
Nov 12 10:21:43.159: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012388788s
Nov 12 10:21:45.162: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015480876s
Nov 12 10:21:47.164: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017965558s
Nov 12 10:21:49.167: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020693072s
Nov 12 10:21:51.171: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.024387996s
STEP: Saw pod success
Nov 12 10:21:51.171: INFO: Pod "downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214" satisfied condition "success or failure"
Nov 12 10:21:51.173: INFO: Trying to get logs from node node1 pod downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214 container client-container: 
STEP: delete the pod
Nov 12 10:21:51.190: INFO: Waiting for pod downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214 to disappear
Nov 12 10:21:51.191: INFO: Pod downwardapi-volume-9bd1cb23-f80e-4c67-b2bf-739b4625e214 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:21:51.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1309" for this suite.

• [SLOW TEST:10.068 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1208,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:21:51.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-3c913288-93fa-4f5e-b492-05f981f28329
STEP: Creating a pod to test consume secrets
Nov 12 10:21:51.217: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5" in namespace "projected-613" to be "success or failure"
Nov 12 10:21:51.219: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.485979ms
Nov 12 10:21:53.221: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004021492s
Nov 12 10:21:55.224: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006833195s
Nov 12 10:21:57.227: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009283262s
Nov 12 10:21:59.230: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012636609s
Nov 12 10:22:01.232: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01497152s
STEP: Saw pod success
Nov 12 10:22:01.232: INFO: Pod "pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5" satisfied condition "success or failure"
Nov 12 10:22:01.235: INFO: Trying to get logs from node node3 pod pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5 container projected-secret-volume-test: 
STEP: delete the pod
Nov 12 10:22:01.251: INFO: Waiting for pod pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5 to disappear
Nov 12 10:22:01.252: INFO: Pod pod-projected-secrets-f5ff58c0-c1f1-4154-9ca9-46e50ee3f2c5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:22:01.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-613" for this suite.

• [SLOW TEST:10.060 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1264,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:22:01.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Nov 12 10:22:01.274: INFO: namespace kubectl-3040
Nov 12 10:22:01.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3040'
Nov 12 10:22:01.496: INFO: stderr: ""
Nov 12 10:22:01.496: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Nov 12 10:22:02.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:02.499: INFO: Found 0 / 1
Nov 12 10:22:03.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:03.499: INFO: Found 0 / 1
Nov 12 10:22:04.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:04.499: INFO: Found 0 / 1
Nov 12 10:22:05.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:05.499: INFO: Found 0 / 1
Nov 12 10:22:06.500: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:06.500: INFO: Found 0 / 1
Nov 12 10:22:07.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:07.499: INFO: Found 0 / 1
Nov 12 10:22:08.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:08.499: INFO: Found 0 / 1
Nov 12 10:22:09.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:09.499: INFO: Found 0 / 1
Nov 12 10:22:10.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:10.499: INFO: Found 0 / 1
Nov 12 10:22:11.499: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:11.499: INFO: Found 1 / 1
Nov 12 10:22:11.499: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Nov 12 10:22:11.501: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:22:11.501: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Nov 12 10:22:11.501: INFO: wait on agnhost-master startup in kubectl-3040 
Nov 12 10:22:11.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-wdqjx agnhost-master --namespace=kubectl-3040'
Nov 12 10:22:11.642: INFO: stderr: ""
Nov 12 10:22:11.642: INFO: stdout: "Paused\n"
STEP: exposing RC
Nov 12 10:22:11.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3040'
Nov 12 10:22:11.788: INFO: stderr: ""
Nov 12 10:22:11.788: INFO: stdout: "service/rm2 exposed\n"
Nov 12 10:22:11.790: INFO: Service rm2 in namespace kubectl-3040 found.
STEP: exposing service
Nov 12 10:22:13.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3040'
Nov 12 10:22:13.927: INFO: stderr: ""
Nov 12 10:22:13.927: INFO: stdout: "service/rm3 exposed\n"
Nov 12 10:22:13.928: INFO: Service rm3 in namespace kubectl-3040 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:22:15.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3040" for this suite.

• [SLOW TEST:14.681 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":81,"skipped":1265,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:22:15.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:22:15.956: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:22:17.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1756" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":82,"skipped":1267,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:22:17.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:22:17.086: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:22:22.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-645" for this suite.

• [SLOW TEST:5.641 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":83,"skipped":1292,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:22:22.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:22:44.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7230" for this suite.

• [SLOW TEST:22.024 seconds]
[sig-apps] Job
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":84,"skipped":1301,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:22:44.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1112 10:22:54.772004      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 12 10:22:54.772: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:22:54.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2958" for this suite.

• [SLOW TEST:10.039 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":85,"skipped":1304,"failed":0}
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:22:54.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 10:22:54.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6154'
Nov 12 10:22:54.930: INFO: stderr: ""
Nov 12 10:22:54.930: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Nov 12 10:22:54.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6154'
Nov 12 10:23:08.750: INFO: stderr: ""
Nov 12 10:23:08.750: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:23:08.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6154" for this suite.

• [SLOW TEST:13.978 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":86,"skipped":1304,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:23:08.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Nov 12 10:23:08.782: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7864 /api/v1/namespaces/watch-7864/configmaps/e2e-watch-test-watch-closed 86b19481-2123-465a-aa97-ce9eb5fd7792 12285 0 2020-11-12 10:23:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 12 10:23:08.782: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7864 /api/v1/namespaces/watch-7864/configmaps/e2e-watch-test-watch-closed 86b19481-2123-465a-aa97-ce9eb5fd7792 12286 0 2020-11-12 10:23:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Nov 12 10:23:08.790: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7864 /api/v1/namespaces/watch-7864/configmaps/e2e-watch-test-watch-closed 86b19481-2123-465a-aa97-ce9eb5fd7792 12287 0 2020-11-12 10:23:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 12 10:23:08.790: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7864 /api/v1/namespaces/watch-7864/configmaps/e2e-watch-test-watch-closed 86b19481-2123-465a-aa97-ce9eb5fd7792 12288 0 2020-11-12 10:23:08 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:23:08.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7864" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":87,"skipped":1350,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:23:08.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-251de5c6-660a-42c5-95e3-d68a10435fbf
STEP: Creating a pod to test consume secrets
Nov 12 10:23:08.813: INFO: Waiting up to 5m0s for pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48" in namespace "secrets-661" to be "success or failure"
Nov 12 10:23:08.814: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48": Phase="Pending", Reason="", readiness=false. Elapsed: 1.456887ms
Nov 12 10:23:10.817: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003771215s
Nov 12 10:23:12.819: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006390574s
Nov 12 10:23:14.822: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009015915s
Nov 12 10:23:16.824: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011547898s
Nov 12 10:23:18.827: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014540259s
STEP: Saw pod success
Nov 12 10:23:18.827: INFO: Pod "pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48" satisfied condition "success or failure"
Nov 12 10:23:18.830: INFO: Trying to get logs from node node2 pod pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48 container secret-volume-test: 
STEP: delete the pod
Nov 12 10:23:18.848: INFO: Waiting for pod pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48 to disappear
Nov 12 10:23:18.851: INFO: Pod pod-secrets-b7790946-7a1c-4406-83ee-7ef04f51bb48 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:23:18.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-661" for this suite.

• [SLOW TEST:10.062 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:23:18.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5334
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Nov 12 10:23:18.881: INFO: Found 0 stateful pods, waiting for 3
Nov 12 10:23:28.884: INFO: Found 1 stateful pods, waiting for 3
Nov 12 10:23:38.884: INFO: Found 2 stateful pods, waiting for 3
Nov 12 10:23:48.884: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:23:48.884: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:23:48.884: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Nov 12 10:23:58.886: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:23:58.886: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:23:58.886: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Nov 12 10:23:58.909: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Nov 12 10:24:08.933: INFO: Updating stateful set ss2
Nov 12 10:24:08.937: INFO: Waiting for Pod statefulset-5334/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 10:24:18.942: INFO: Waiting for Pod statefulset-5334/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Nov 12 10:24:28.953: INFO: Found 2 stateful pods, waiting for 3
Nov 12 10:24:38.956: INFO: Found 2 stateful pods, waiting for 3
Nov 12 10:24:48.956: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:24:48.956: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:24:48.956: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Nov 12 10:24:58.956: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:24:58.956: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:24:58.956: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Nov 12 10:24:58.975: INFO: Updating stateful set ss2
Nov 12 10:24:58.979: INFO: Waiting for Pod statefulset-5334/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 10:25:08.999: INFO: Updating stateful set ss2
Nov 12 10:25:09.002: INFO: Waiting for StatefulSet statefulset-5334/ss2 to complete update
Nov 12 10:25:09.002: INFO: Waiting for Pod statefulset-5334/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 10:25:19.008: INFO: Waiting for StatefulSet statefulset-5334/ss2 to complete update
Nov 12 10:25:19.008: INFO: Waiting for Pod statefulset-5334/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 10:25:29.008: INFO: Waiting for StatefulSet statefulset-5334/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Nov 12 10:25:39.008: INFO: Deleting all statefulset in ns statefulset-5334
Nov 12 10:25:39.010: INFO: Scaling statefulset ss2 to 0
Nov 12 10:26:09.027: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 10:26:09.030: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:26:09.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5334" for this suite.

• [SLOW TEST:170.186 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":89,"skipped":1390,"failed":0}
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:26:09.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-600e015e-82c6-400d-882a-8e17de044bed in namespace container-probe-8600
Nov 12 10:26:19.071: INFO: Started pod busybox-600e015e-82c6-400d-882a-8e17de044bed in namespace container-probe-8600
STEP: checking the pod's current state and verifying that restartCount is present
Nov 12 10:26:19.074: INFO: Initial restart count of pod busybox-600e015e-82c6-400d-882a-8e17de044bed is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:30:19.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8600" for this suite.

• [SLOW TEST:250.364 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:30:19.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-0fdae269-5aee-4629-a00d-3a3424826320
STEP: Creating a pod to test consume secrets
Nov 12 10:30:19.429: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9" in namespace "projected-6549" to be "success or failure"
Nov 12 10:30:19.431: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.621849ms
Nov 12 10:30:21.433: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004412205s
Nov 12 10:30:23.437: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007573826s
Nov 12 10:30:25.440: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011017641s
Nov 12 10:30:27.442: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013494612s
Nov 12 10:30:29.446: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.017033313s
STEP: Saw pod success
Nov 12 10:30:29.446: INFO: Pod "pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9" satisfied condition "success or failure"
Nov 12 10:30:29.449: INFO: Trying to get logs from node node3 pod pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9 container projected-secret-volume-test: 
STEP: delete the pod
Nov 12 10:30:29.466: INFO: Waiting for pod pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9 to disappear
Nov 12 10:30:29.468: INFO: Pod pod-projected-secrets-f65a2de2-7191-4414-afa3-921bedfd2da9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:30:29.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6549" for this suite.

• [SLOW TEST:10.063 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:30:29.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Nov 12 10:30:29.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2212 /api/v1/namespaces/watch-2212/configmaps/e2e-watch-test-resource-version 9167148f-c254-4b89-812a-bc37097f6633 13596 0 2020-11-12 10:30:29 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 12 10:30:29.501: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2212 /api/v1/namespaces/watch-2212/configmaps/e2e-watch-test-resource-version 9167148f-c254-4b89-812a-bc37097f6633 13597 0 2020-11-12 10:30:29 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:30:29.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2212" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":92,"skipped":1452,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:30:29.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2666 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2666;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2666 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2666;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2666.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2666.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2666.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2666.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2666.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2666.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2666.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2666.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2666.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 226.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.226_udp@PTR;check="$$(dig +tcp +noall +answer +search 226.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.226_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2666 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2666;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2666 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2666;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2666.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2666.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2666.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2666.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2666.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2666.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2666.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2666.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2666.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2666.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 226.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.226_udp@PTR;check="$$(dig +tcp +noall +answer +search 226.46.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.46.226_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 10:30:53.538: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.541: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-2666 from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2666 from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.548: INFO: Unable to read wheezy_udp@dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.550: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.553: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.555: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.571: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.573: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.575: INFO: Unable to read jessie_udp@dns-test-service.dns-2666 from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.578: INFO: Unable to read jessie_tcp@dns-test-service.dns-2666 from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.580: INFO: Unable to read jessie_udp@dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.582: INFO: Unable to read jessie_tcp@dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.584: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.586: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2666.svc from pod dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004: the server could not find the requested resource (get pods dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004)
Nov 12 10:30:53.599: INFO: Lookups using dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2666 wheezy_tcp@dns-test-service.dns-2666 wheezy_udp@dns-test-service.dns-2666.svc wheezy_tcp@dns-test-service.dns-2666.svc wheezy_udp@_http._tcp.dns-test-service.dns-2666.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2666.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2666 jessie_tcp@dns-test-service.dns-2666 jessie_udp@dns-test-service.dns-2666.svc jessie_tcp@dns-test-service.dns-2666.svc jessie_udp@_http._tcp.dns-test-service.dns-2666.svc jessie_tcp@_http._tcp.dns-test-service.dns-2666.svc]

Nov 12 10:30:58.670: INFO: DNS probes using dns-2666/dns-test-c9780ba3-d761-44fa-b7d3-6799cf934004 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:30:58.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2666" for this suite.

• [SLOW TEST:29.185 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":93,"skipped":1455,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:30:58.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:31:45.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2337" for this suite.

• [SLOW TEST:47.198 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:31:45.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:32:02.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-444" for this suite.

• [SLOW TEST:17.047 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":95,"skipped":1518,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:32:02.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 10:32:02.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6335'
Nov 12 10:32:03.195: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 12 10:32:03.196: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Nov 12 10:32:05.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6335'
Nov 12 10:32:05.342: INFO: stderr: ""
Nov 12 10:32:05.342: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:32:05.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6335" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":96,"skipped":1519,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:32:05.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-b8wn
STEP: Creating a pod to test atomic-volume-subpath
Nov 12 10:32:05.371: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-b8wn" in namespace "subpath-725" to be "success or failure"
Nov 12 10:32:05.373: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 1.524142ms
Nov 12 10:32:07.375: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003891421s
Nov 12 10:32:09.378: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006380492s
Nov 12 10:32:11.380: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009039437s
Nov 12 10:32:13.383: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011954077s
Nov 12 10:32:15.386: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 10.014436897s
Nov 12 10:32:17.388: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 12.017246051s
Nov 12 10:32:19.391: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 14.019926063s
Nov 12 10:32:21.395: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 16.023639517s
Nov 12 10:32:23.399: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 18.027334584s
Nov 12 10:32:25.401: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 20.029722548s
Nov 12 10:32:27.404: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 22.032548299s
Nov 12 10:32:29.406: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 24.034979306s
Nov 12 10:32:31.409: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 26.037665763s
Nov 12 10:32:33.411: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Running", Reason="", readiness=true. Elapsed: 28.040126095s
Nov 12 10:32:35.414: INFO: Pod "pod-subpath-test-secret-b8wn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.042740385s
STEP: Saw pod success
Nov 12 10:32:35.414: INFO: Pod "pod-subpath-test-secret-b8wn" satisfied condition "success or failure"
Nov 12 10:32:35.416: INFO: Trying to get logs from node node4 pod pod-subpath-test-secret-b8wn container test-container-subpath-secret-b8wn: 
STEP: delete the pod
Nov 12 10:32:35.434: INFO: Waiting for pod pod-subpath-test-secret-b8wn to disappear
Nov 12 10:32:35.435: INFO: Pod pod-subpath-test-secret-b8wn no longer exists
STEP: Deleting pod pod-subpath-test-secret-b8wn
Nov 12 10:32:35.435: INFO: Deleting pod "pod-subpath-test-secret-b8wn" in namespace "subpath-725"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:32:35.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-725" for this suite.

• [SLOW TEST:30.095 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":97,"skipped":1520,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:32:35.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-rndl
STEP: Creating a pod to test atomic-volume-subpath
Nov 12 10:32:35.465: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rndl" in namespace "subpath-9255" to be "success or failure"
Nov 12 10:32:35.467: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Pending", Reason="", readiness=false. Elapsed: 1.527488ms
Nov 12 10:32:37.469: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004253538s
Nov 12 10:32:39.472: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007169929s
Nov 12 10:32:41.475: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010120871s
Nov 12 10:32:43.477: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012265008s
Nov 12 10:32:45.480: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 10.014519556s
Nov 12 10:32:47.483: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 12.017632899s
Nov 12 10:32:49.486: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 14.020518898s
Nov 12 10:32:51.489: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 16.02345087s
Nov 12 10:32:53.491: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 18.025752807s
Nov 12 10:32:55.494: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 20.028775635s
Nov 12 10:32:57.497: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 22.031529406s
Nov 12 10:32:59.499: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 24.034192953s
Nov 12 10:33:01.502: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 26.036684485s
Nov 12 10:33:03.504: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Running", Reason="", readiness=true. Elapsed: 28.039087684s
Nov 12 10:33:05.507: INFO: Pod "pod-subpath-test-projected-rndl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.041319779s
STEP: Saw pod success
Nov 12 10:33:05.507: INFO: Pod "pod-subpath-test-projected-rndl" satisfied condition "success or failure"
Nov 12 10:33:05.508: INFO: Trying to get logs from node node2 pod pod-subpath-test-projected-rndl container test-container-subpath-projected-rndl: 
STEP: delete the pod
Nov 12 10:33:05.525: INFO: Waiting for pod pod-subpath-test-projected-rndl to disappear
Nov 12 10:33:05.527: INFO: Pod pod-subpath-test-projected-rndl no longer exists
STEP: Deleting pod pod-subpath-test-projected-rndl
Nov 12 10:33:05.527: INFO: Deleting pod "pod-subpath-test-projected-rndl" in namespace "subpath-9255"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:33:05.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9255" for this suite.

• [SLOW TEST:30.092 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":98,"skipped":1530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:33:05.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7608.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7608.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7608.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7608.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7608.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7608.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 10:33:27.584: INFO: DNS probes using dns-7608/dns-test-51fc1e0f-b0e1-479d-b214-a3b0421c3548 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:33:27.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7608" for this suite.

• [SLOW TEST:22.064 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":99,"skipped":1566,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:33:27.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Nov 12 10:33:27.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7234 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Nov 12 10:33:37.434: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Nov 12 10:33:37.434: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:33:39.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7234" for this suite.

• [SLOW TEST:11.847 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":100,"skipped":1580,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:33:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1112 10:33:45.478588      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 12 10:33:45.478: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:33:45.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3837" for this suite.

• [SLOW TEST:6.038 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":101,"skipped":1585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:33:45.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 10:33:45.504: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587" in namespace "projected-5225" to be "success or failure"
Nov 12 10:33:45.506: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587": Phase="Pending", Reason="", readiness=false. Elapsed: 1.561357ms
Nov 12 10:33:47.510: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005932304s
Nov 12 10:33:49.513: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008491884s
Nov 12 10:33:51.515: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011214411s
Nov 12 10:33:53.518: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014041911s
Nov 12 10:33:55.520: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016255662s
STEP: Saw pod success
Nov 12 10:33:55.520: INFO: Pod "downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587" satisfied condition "success or failure"
Nov 12 10:33:55.523: INFO: Trying to get logs from node node1 pod downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587 container client-container: 
STEP: delete the pod
Nov 12 10:33:55.538: INFO: Waiting for pod downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587 to disappear
Nov 12 10:33:55.540: INFO: Pod downwardapi-volume-c26aeddc-3851-44cb-a287-09481d118587 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:33:55.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5225" for this suite.

• [SLOW TEST:10.060 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1622,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:33:55.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:33:55.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:34:05.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-71" for this suite.

• [SLOW TEST:10.049 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1625,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:34:05.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Nov 12 10:34:05.608: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Nov 12 10:34:06.139: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Nov 12 10:34:08.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:10.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:12.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:14.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:16.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:18.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:20.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:22.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:24.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774046, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:34:28.983: INFO: Waited 2.810724665s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:34:29.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-896" for this suite.

• [SLOW TEST:23.929 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":104,"skipped":1635,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:34:29.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-8ca70c84-8980-4f99-a5f0-17637f2b9cb9
STEP: Creating a pod to test consume secrets
Nov 12 10:34:29.540: INFO: Waiting up to 5m0s for pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8" in namespace "secrets-4357" to be "success or failure"
Nov 12 10:34:29.542: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.30008ms
Nov 12 10:34:31.544: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003769327s
Nov 12 10:34:33.547: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006527505s
Nov 12 10:34:35.550: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009153709s
Nov 12 10:34:37.552: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011174478s
Nov 12 10:34:39.554: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013545331s
STEP: Saw pod success
Nov 12 10:34:39.554: INFO: Pod "pod-secrets-650f9158-6672-429b-8965-229e665bd1e8" satisfied condition "success or failure"
Nov 12 10:34:39.556: INFO: Trying to get logs from node node3 pod pod-secrets-650f9158-6672-429b-8965-229e665bd1e8 container secret-volume-test: 
STEP: delete the pod
Nov 12 10:34:39.572: INFO: Waiting for pod pod-secrets-650f9158-6672-429b-8965-229e665bd1e8 to disappear
Nov 12 10:34:39.574: INFO: Pod pod-secrets-650f9158-6672-429b-8965-229e665bd1e8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:34:39.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4357" for this suite.

• [SLOW TEST:10.057 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1637,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:34:39.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-wl7w
STEP: Creating a pod to test atomic-volume-subpath
Nov 12 10:34:39.602: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wl7w" in namespace "subpath-5077" to be "success or failure"
Nov 12 10:34:39.603: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 1.555422ms
Nov 12 10:34:41.606: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003949472s
Nov 12 10:34:43.608: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006377513s
Nov 12 10:34:45.612: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010070341s
Nov 12 10:34:47.615: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013273931s
Nov 12 10:34:49.618: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 10.015963122s
Nov 12 10:34:51.620: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 12.018501143s
Nov 12 10:34:53.623: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 14.021418723s
Nov 12 10:34:55.626: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 16.024247799s
Nov 12 10:34:57.629: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 18.027100543s
Nov 12 10:34:59.631: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 20.029366131s
Nov 12 10:35:01.634: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 22.032131304s
Nov 12 10:35:03.637: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 24.034952055s
Nov 12 10:35:05.639: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 26.037507295s
Nov 12 10:35:07.642: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 28.040733957s
Nov 12 10:35:09.645: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Running", Reason="", readiness=true. Elapsed: 30.043311723s
Nov 12 10:35:11.647: INFO: Pod "pod-subpath-test-downwardapi-wl7w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.045795618s
STEP: Saw pod success
Nov 12 10:35:11.647: INFO: Pod "pod-subpath-test-downwardapi-wl7w" satisfied condition "success or failure"
Nov 12 10:35:11.649: INFO: Trying to get logs from node node3 pod pod-subpath-test-downwardapi-wl7w container test-container-subpath-downwardapi-wl7w: 
STEP: delete the pod
Nov 12 10:35:11.660: INFO: Waiting for pod pod-subpath-test-downwardapi-wl7w to disappear
Nov 12 10:35:11.662: INFO: Pod pod-subpath-test-downwardapi-wl7w no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-wl7w
Nov 12 10:35:11.662: INFO: Deleting pod "pod-subpath-test-downwardapi-wl7w" in namespace "subpath-5077"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:35:11.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5077" for this suite.

• [SLOW TEST:32.089 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":106,"skipped":1647,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:35:11.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-63ec44fa-62fa-4d7b-bc68-c71d5dfdf3af
STEP: Creating a pod to test consume secrets
Nov 12 10:35:11.691: INFO: Waiting up to 5m0s for pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a" in namespace "secrets-9541" to be "success or failure"
Nov 12 10:35:11.692: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.580631ms
Nov 12 10:35:13.695: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004129534s
Nov 12 10:35:15.697: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006148135s
Nov 12 10:35:17.702: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011180012s
Nov 12 10:35:19.704: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013255575s
Nov 12 10:35:21.706: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015725989s
STEP: Saw pod success
Nov 12 10:35:21.706: INFO: Pod "pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a" satisfied condition "success or failure"
Nov 12 10:35:21.708: INFO: Trying to get logs from node node4 pod pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a container secret-volume-test: 
STEP: delete the pod
Nov 12 10:35:21.726: INFO: Waiting for pod pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a to disappear
Nov 12 10:35:21.728: INFO: Pod pod-secrets-229111b6-9491-43b7-a6d8-62eca1c0a87a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:35:21.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9541" for this suite.

• [SLOW TEST:10.064 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1664,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:35:21.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3821.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3821.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3821.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3821.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3821.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3821.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3821.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 10:35:43.765: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.768: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.770: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.773: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.780: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.782: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.785: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.787: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3821.svc.cluster.local from pod dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43: the server could not find the requested resource (get pods dns-test-57292226-b047-4872-af93-74349b421a43)
Nov 12 10:35:43.792: INFO: Lookups using dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3821.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3821.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local jessie_udp@dns-test-service-2.dns-3821.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3821.svc.cluster.local]

Nov 12 10:35:48.820: INFO: DNS probes using dns-3821/dns-test-57292226-b047-4872-af93-74349b421a43 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:35:48.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3821" for this suite.

• [SLOW TEST:27.105 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":108,"skipped":1668,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:35:48.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5252/configmap-test-b7deee6a-1d54-40dd-97aa-6d70884a8f35
STEP: Creating a pod to test consume configMaps
Nov 12 10:35:48.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf" in namespace "configmap-5252" to be "success or failure"
Nov 12 10:35:48.862: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.70811ms
Nov 12 10:35:50.864: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00423754s
Nov 12 10:35:52.867: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006597881s
Nov 12 10:35:54.870: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0097889s
Nov 12 10:35:56.872: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012227157s
Nov 12 10:35:58.875: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014615394s
STEP: Saw pod success
Nov 12 10:35:58.875: INFO: Pod "pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf" satisfied condition "success or failure"
Nov 12 10:35:58.876: INFO: Trying to get logs from node node1 pod pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf container env-test: 
STEP: delete the pod
Nov 12 10:35:58.891: INFO: Waiting for pod pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf to disappear
Nov 12 10:35:58.893: INFO: Pod pod-configmaps-65742f90-4600-48ad-8393-7059aacda9bf no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:35:58.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5252" for this suite.

• [SLOW TEST:10.060 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1691,"failed":0}
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:35:58.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-3691/configmap-test-3542a907-ad0a-4a52-aed0-237c9d13beb2
STEP: Creating a pod to test consume configMaps
Nov 12 10:35:58.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930" in namespace "configmap-3691" to be "success or failure"
Nov 12 10:35:58.922: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086365ms
Nov 12 10:36:00.924: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004453304s
Nov 12 10:36:02.927: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00690231s
Nov 12 10:36:04.930: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00985553s
Nov 12 10:36:06.932: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0123765s
Nov 12 10:36:08.935: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014701249s
STEP: Saw pod success
Nov 12 10:36:08.935: INFO: Pod "pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930" satisfied condition "success or failure"
Nov 12 10:36:08.936: INFO: Trying to get logs from node node4 pod pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930 container env-test: 
STEP: delete the pod
Nov 12 10:36:08.946: INFO: Waiting for pod pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930 to disappear
Nov 12 10:36:08.947: INFO: Pod pod-configmaps-93e6112a-03fb-4f3f-859f-bf8a993dc930 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:36:08.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3691" for this suite.

• [SLOW TEST:10.053 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1691,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:36:08.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8902
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8902
I1112 10:36:08.978594      10 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8902, replica count: 2
I1112 10:36:12.029086      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:36:15.029344      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:36:18.029949      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:36:21.030209      10 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 12 10:36:21.030: INFO: Creating new exec pod
Nov 12 10:36:32.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8902 execpodz7ljw -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Nov 12 10:36:32.269: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Nov 12 10:36:32.269: INFO: stdout: ""
Nov 12 10:36:32.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8902 execpodz7ljw -- /bin/sh -x -c nc -zv -t -w 2 10.233.7.178 80'
Nov 12 10:36:32.512: INFO: stderr: "+ nc -zv -t -w 2 10.233.7.178 80\nConnection to 10.233.7.178 80 port [tcp/http] succeeded!\n"
Nov 12 10:36:32.512: INFO: stdout: ""
Nov 12 10:36:32.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8902 execpodz7ljw -- /bin/sh -x -c nc -zv -t -w 2 10.0.20.13 32539'
Nov 12 10:36:32.738: INFO: stderr: "+ nc -zv -t -w 2 10.0.20.13 32539\nConnection to 10.0.20.13 32539 port [tcp/32539] succeeded!\n"
Nov 12 10:36:32.738: INFO: stdout: ""
Nov 12 10:36:32.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8902 execpodz7ljw -- /bin/sh -x -c nc -zv -t -w 2 10.0.20.14 32539'
Nov 12 10:36:32.956: INFO: stderr: "+ nc -zv -t -w 2 10.0.20.14 32539\nConnection to 10.0.20.14 32539 port [tcp/32539] succeeded!\n"
Nov 12 10:36:32.956: INFO: stdout: ""
Nov 12 10:36:32.956: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:36:32.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8902" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:24.019 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":111,"skipped":1706,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:36:32.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-f3b8a59b-4dfd-4d7d-8f6c-170df096f93b
STEP: Creating configMap with name cm-test-opt-upd-f67e2edb-a165-4b6e-8c63-707cf24cf507
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f3b8a59b-4dfd-4d7d-8f6c-170df096f93b
STEP: Updating configmap cm-test-opt-upd-f67e2edb-a165-4b6e-8c63-707cf24cf507
STEP: Creating configMap with name cm-test-opt-create-8a31b5bb-152b-4b1d-bd25-2bb57914f0cb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:36:49.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1099" for this suite.

• [SLOW TEST:16.102 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1708,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:36:49.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:36:49.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Nov 12 10:36:49.201: INFO: stderr: ""
Nov 12 10:36:49.201: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.13\", GitCommit:\"30d651da517185653e34e7ab99a792be6a3d9495\", GitTreeState:\"clean\", BuildDate:\"2020-10-15T01:06:31Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.7\", GitCommit:\"be3d344ed06bff7a4fc60656200a93c74f31f9a4\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T19:24:46Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:36:49.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7975" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":113,"skipped":1716,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:36:49.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:36:49.232: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2ab7217c-fb3d-4ecc-ae80-13bdc19023f2", Controller:(*bool)(0xc00350ac3a), BlockOwnerDeletion:(*bool)(0xc00350ac3b)}}
Nov 12 10:36:49.234: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"90c41415-2d06-40b5-89f3-eddd8e29e027", Controller:(*bool)(0xc003ebdee6), BlockOwnerDeletion:(*bool)(0xc003ebdee7)}}
Nov 12 10:36:49.236: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c176352b-df88-42b6-bfa4-a62cb7bbd23d", Controller:(*bool)(0xc0035af326), BlockOwnerDeletion:(*bool)(0xc0035af327)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:36:54.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3087" for this suite.

• [SLOW TEST:5.038 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":114,"skipped":1718,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:36:54.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Nov 12 10:36:54.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4523'
Nov 12 10:36:54.501: INFO: stderr: ""
Nov 12 10:36:54.501: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 12 10:36:54.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4523'
Nov 12 10:36:54.641: INFO: stderr: ""
Nov 12 10:36:54.641: INFO: stdout: "update-demo-nautilus-bpb46 update-demo-nautilus-m4lrt "
Nov 12 10:36:54.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bpb46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:36:54.777: INFO: stderr: ""
Nov 12 10:36:54.777: INFO: stdout: ""
Nov 12 10:36:54.777: INFO: update-demo-nautilus-bpb46 is created but not running
Nov 12 10:36:59.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4523'
Nov 12 10:36:59.910: INFO: stderr: ""
Nov 12 10:36:59.910: INFO: stdout: "update-demo-nautilus-bpb46 update-demo-nautilus-m4lrt "
Nov 12 10:36:59.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bpb46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:00.031: INFO: stderr: ""
Nov 12 10:37:00.031: INFO: stdout: ""
Nov 12 10:37:00.031: INFO: update-demo-nautilus-bpb46 is created but not running
Nov 12 10:37:05.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4523'
Nov 12 10:37:05.180: INFO: stderr: ""
Nov 12 10:37:05.180: INFO: stdout: "update-demo-nautilus-bpb46 update-demo-nautilus-m4lrt "
Nov 12 10:37:05.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bpb46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:05.312: INFO: stderr: ""
Nov 12 10:37:05.312: INFO: stdout: ""
Nov 12 10:37:05.312: INFO: update-demo-nautilus-bpb46 is created but not running
Nov 12 10:37:10.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4523'
Nov 12 10:37:10.480: INFO: stderr: ""
Nov 12 10:37:10.480: INFO: stdout: "update-demo-nautilus-bpb46 update-demo-nautilus-m4lrt "
Nov 12 10:37:10.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bpb46 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:10.616: INFO: stderr: ""
Nov 12 10:37:10.616: INFO: stdout: "true"
Nov 12 10:37:10.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bpb46 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:10.784: INFO: stderr: ""
Nov 12 10:37:10.784: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 10:37:10.784: INFO: validating pod update-demo-nautilus-bpb46
Nov 12 10:37:10.791: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 10:37:10.791: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 10:37:10.791: INFO: update-demo-nautilus-bpb46 is verified up and running
Nov 12 10:37:10.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4lrt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:10.949: INFO: stderr: ""
Nov 12 10:37:10.949: INFO: stdout: "true"
Nov 12 10:37:10.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m4lrt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:11.092: INFO: stderr: ""
Nov 12 10:37:11.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 10:37:11.092: INFO: validating pod update-demo-nautilus-m4lrt
Nov 12 10:37:11.096: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 10:37:11.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 10:37:11.096: INFO: update-demo-nautilus-m4lrt is verified up and running
STEP: rolling-update to new replication controller
Nov 12 10:37:11.101: INFO: scanned /root for discovery docs: 
Nov 12 10:37:11.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4523'
Nov 12 10:37:47.692: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Nov 12 10:37:47.692: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 12 10:37:47.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4523'
Nov 12 10:37:47.826: INFO: stderr: ""
Nov 12 10:37:47.826: INFO: stdout: "update-demo-kitten-6cx2b update-demo-kitten-88566 update-demo-nautilus-m4lrt "
STEP: Replicas for name=update-demo: expected=2 actual=3
Nov 12 10:37:52.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4523'
Nov 12 10:37:52.985: INFO: stderr: ""
Nov 12 10:37:52.985: INFO: stdout: "update-demo-kitten-6cx2b update-demo-kitten-88566 "
Nov 12 10:37:52.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6cx2b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:53.143: INFO: stderr: ""
Nov 12 10:37:53.143: INFO: stdout: "true"
Nov 12 10:37:53.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6cx2b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:53.297: INFO: stderr: ""
Nov 12 10:37:53.297: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Nov 12 10:37:53.297: INFO: validating pod update-demo-kitten-6cx2b
Nov 12 10:37:53.304: INFO: got data: {
  "image": "kitten.jpg"
}

Nov 12 10:37:53.309: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Nov 12 10:37:53.309: INFO: update-demo-kitten-6cx2b is verified up and running
Nov 12 10:37:53.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-88566 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:53.456: INFO: stderr: ""
Nov 12 10:37:53.456: INFO: stdout: "true"
Nov 12 10:37:53.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-88566 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4523'
Nov 12 10:37:53.611: INFO: stderr: ""
Nov 12 10:37:53.611: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Nov 12 10:37:53.611: INFO: validating pod update-demo-kitten-88566
Nov 12 10:37:53.620: INFO: got data: {
  "image": "kitten.jpg"
}

Nov 12 10:37:53.620: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Nov 12 10:37:53.620: INFO: update-demo-kitten-88566 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:37:53.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4523" for this suite.

• [SLOW TEST:59.387 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":115,"skipped":1727,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:37:53.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Nov 12 10:37:53.658: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4416 /api/v1/namespaces/watch-4416/configmaps/e2e-watch-test-label-changed 9062a09a-7fde-4f86-8e46-4536bcdc880a 15612 0 2020-11-12 10:37:53 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 12 10:37:53.658: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4416 /api/v1/namespaces/watch-4416/configmaps/e2e-watch-test-label-changed 9062a09a-7fde-4f86-8e46-4536bcdc880a 15613 0 2020-11-12 10:37:53 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Nov 12 10:37:53.658: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4416 /api/v1/namespaces/watch-4416/configmaps/e2e-watch-test-label-changed 9062a09a-7fde-4f86-8e46-4536bcdc880a 15614 0 2020-11-12 10:37:53 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Nov 12 10:38:03.674: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4416 /api/v1/namespaces/watch-4416/configmaps/e2e-watch-test-label-changed 9062a09a-7fde-4f86-8e46-4536bcdc880a 15679 0 2020-11-12 10:37:53 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 12 10:38:03.675: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4416 /api/v1/namespaces/watch-4416/configmaps/e2e-watch-test-label-changed 9062a09a-7fde-4f86-8e46-4536bcdc880a 15680 0 2020-11-12 10:37:53 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Nov 12 10:38:03.675: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-4416 /api/v1/namespaces/watch-4416/configmaps/e2e-watch-test-label-changed 9062a09a-7fde-4f86-8e46-4536bcdc880a 15681 0 2020-11-12 10:37:53 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:38:03.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4416" for this suite.

• [SLOW TEST:10.048 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":116,"skipped":1750,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:38:03.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 10:38:03.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7653'
Nov 12 10:38:03.858: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 12 10:38:03.858: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Nov 12 10:38:03.871: INFO: scanned /root for discovery docs: 
Nov 12 10:38:03.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7653'
Nov 12 10:38:25.889: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Nov 12 10:38:25.889: INFO: stdout: "Created e2e-test-httpd-rc-1c16a358156a4175790954472095cb90\nScaling up e2e-test-httpd-rc-1c16a358156a4175790954472095cb90 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-1c16a358156a4175790954472095cb90 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-1c16a358156a4175790954472095cb90 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Nov 12 10:38:25.889: INFO: stdout: "Created e2e-test-httpd-rc-1c16a358156a4175790954472095cb90\nScaling up e2e-test-httpd-rc-1c16a358156a4175790954472095cb90 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-1c16a358156a4175790954472095cb90 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-1c16a358156a4175790954472095cb90 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Nov 12 10:38:25.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7653'
Nov 12 10:38:26.044: INFO: stderr: ""
Nov 12 10:38:26.044: INFO: stdout: "e2e-test-httpd-rc-1c16a358156a4175790954472095cb90-jkkc9 e2e-test-httpd-rc-5jczj "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Nov 12 10:38:31.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7653'
Nov 12 10:38:31.155: INFO: stderr: ""
Nov 12 10:38:31.155: INFO: stdout: "e2e-test-httpd-rc-1c16a358156a4175790954472095cb90-jkkc9 "
Nov 12 10:38:31.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-1c16a358156a4175790954472095cb90-jkkc9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7653'
Nov 12 10:38:31.265: INFO: stderr: ""
Nov 12 10:38:31.265: INFO: stdout: "true"
Nov 12 10:38:31.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-1c16a358156a4175790954472095cb90-jkkc9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7653'
Nov 12 10:38:31.373: INFO: stderr: ""
Nov 12 10:38:31.373: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Nov 12 10:38:31.374: INFO: e2e-test-httpd-rc-1c16a358156a4175790954472095cb90-jkkc9 is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Nov 12 10:38:31.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7653'
Nov 12 10:38:31.483: INFO: stderr: ""
Nov 12 10:38:31.483: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:38:31.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7653" for this suite.

• [SLOW TEST:27.812 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":117,"skipped":1755,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:38:31.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:38:32.145: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:38:34.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:38:36.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:38:38.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:38:40.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774312, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:38:43.160: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:38:43.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4250" for this suite.
STEP: Destroying namespace "webhook-4250-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.733 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":118,"skipped":1755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:38:43.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:38:54.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8669" for this suite.

• [SLOW TEST:11.040 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":119,"skipped":1782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:38:54.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:39:05.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2075" for this suite.

• [SLOW TEST:11.041 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":120,"skipped":1807,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:39:05.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:39:05.911: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:39:07.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:09.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:11.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:13.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:15.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774345, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:39:18.926: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that the server cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:39:18.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5642" for this suite.
STEP: Destroying namespace "webhook-5642-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.671 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":121,"skipped":1814,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:39:18.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on the node's default medium
Nov 12 10:39:19.000: INFO: Waiting up to 5m0s for pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2" in namespace "emptydir-1533" to be "success or failure"
Nov 12 10:39:19.001: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.473209ms
Nov 12 10:39:21.004: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004115739s
Nov 12 10:39:23.006: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006504978s
Nov 12 10:39:25.009: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009534758s
Nov 12 10:39:27.013: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013099071s
Nov 12 10:39:29.017: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.017598926s
STEP: Saw pod success
Nov 12 10:39:29.017: INFO: Pod "pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2" satisfied condition "success or failure"
Nov 12 10:39:29.019: INFO: Trying to get logs from node node4 pod pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2 container test-container: 
STEP: delete the pod
Nov 12 10:39:29.039: INFO: Waiting for pod pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2 to disappear
Nov 12 10:39:29.040: INFO: Pod pod-917f6b35-b5a5-4d55-83e6-25db1e621fa2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:39:29.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1533" for this suite.

• [SLOW TEST:10.066 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1833,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:39:29.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Nov 12 10:39:29.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:39:42.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2168" for this suite.

• [SLOW TEST:13.801 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":123,"skipped":1861,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:39:42.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:39:43.998: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:39:46.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774384, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:48.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774384, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:50.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774384, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:52.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774384, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:39:54.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774384, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774383, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:39:57.015: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Nov 12 10:40:07.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5974 to-be-attached-pod -i -c=container1'
Nov 12 10:40:07.203: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:40:07.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5974" for this suite.
STEP: Destroying namespace "webhook-5974-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.392 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":124,"skipped":1863,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:40:07.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl label
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276
STEP: creating the pod
Nov 12 10:40:07.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7918'
Nov 12 10:40:07.505: INFO: stderr: ""
Nov 12 10:40:07.505: INFO: stdout: "pod/pause created\n"
Nov 12 10:40:07.505: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Nov 12 10:40:07.505: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7918" to be "running and ready"
Nov 12 10:40:07.507: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 1.781435ms
Nov 12 10:40:09.510: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005010978s
Nov 12 10:40:11.512: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007357114s
Nov 12 10:40:13.515: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010158192s
Nov 12 10:40:15.518: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013008267s
Nov 12 10:40:17.520: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.015422452s
Nov 12 10:40:17.520: INFO: Pod "pause" satisfied condition "running and ready"
Nov 12 10:40:17.520: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Nov 12 10:40:17.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7918'
Nov 12 10:40:17.631: INFO: stderr: ""
Nov 12 10:40:17.631: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Nov 12 10:40:17.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7918'
Nov 12 10:40:17.757: INFO: stderr: ""
Nov 12 10:40:17.757: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Nov 12 10:40:17.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7918'
Nov 12 10:40:17.898: INFO: stderr: ""
Nov 12 10:40:17.898: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Nov 12 10:40:17.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7918'
Nov 12 10:40:18.033: INFO: stderr: ""
Nov 12 10:40:18.033: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] Kubectl label
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283
STEP: using delete to clean up resources
Nov 12 10:40:18.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7918'
Nov 12 10:40:18.173: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 10:40:18.173: INFO: stdout: "pod \"pause\" force deleted\n"
Nov 12 10:40:18.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7918'
Nov 12 10:40:18.330: INFO: stderr: "No resources found in kubectl-7918 namespace.\n"
Nov 12 10:40:18.330: INFO: stdout: ""
Nov 12 10:40:18.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7918 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 12 10:40:18.474: INFO: stderr: ""
Nov 12 10:40:18.474: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:40:18.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7918" for this suite.

• [SLOW TEST:11.238 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":125,"skipped":1886,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:40:18.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 10:40:18.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6807'
Nov 12 10:40:18.624: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 12 10:40:18.624: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Nov 12 10:40:18.633: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-z94hv]
Nov 12 10:40:18.633: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-z94hv" in namespace "kubectl-6807" to be "running and ready"
Nov 12 10:40:18.637: INFO: Pod "e2e-test-httpd-rc-z94hv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.302103ms
Nov 12 10:40:20.641: INFO: Pod "e2e-test-httpd-rc-z94hv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007867633s
Nov 12 10:40:22.644: INFO: Pod "e2e-test-httpd-rc-z94hv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01087647s
Nov 12 10:40:24.646: INFO: Pod "e2e-test-httpd-rc-z94hv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013735266s
Nov 12 10:40:26.649: INFO: Pod "e2e-test-httpd-rc-z94hv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016782014s
Nov 12 10:40:28.652: INFO: Pod "e2e-test-httpd-rc-z94hv": Phase="Running", Reason="", readiness=true. Elapsed: 10.019393372s
Nov 12 10:40:28.652: INFO: Pod "e2e-test-httpd-rc-z94hv" satisfied condition "running and ready"
Nov 12 10:40:28.652: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-z94hv]
Nov 12 10:40:28.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6807'
Nov 12 10:40:28.828: INFO: stderr: ""
Nov 12 10:40:28.828: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.61. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.61. Set the 'ServerName' directive globally to suppress this message\n[Thu Nov 12 10:40:27.737355 2020] [mpm_event:notice] [pid 1:tid 139903685012328] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Nov 12 10:40:27.737398 2020] [core:notice] [pid 1:tid 139903685012328] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Nov 12 10:40:28.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6807'
Nov 12 10:40:28.985: INFO: stderr: ""
Nov 12 10:40:28.985: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:40:28.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6807" for this suite.

• [SLOW TEST:10.518 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
    should create an rc from an image [Deprecated] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":126,"skipped":1889,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:40:28.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:40:29.019: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Nov 12 10:40:31.037: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:40:31.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5114" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":127,"skipped":1892,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:40:31.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Nov 12 10:40:41.592: INFO: Successfully updated pod "annotationupdatef1b0e969-7fa5-4651-ba4f-2be277982761"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:40:45.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3121" for this suite.

• [SLOW TEST:14.574 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":1898,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:40:45.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-ffc54471-fe7c-43be-bba2-3c0f840aec8e
STEP: Creating configMap with name cm-test-opt-upd-04c1b871-a5b9-4669-b3ae-842f37d7a425
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ffc54471-fe7c-43be-bba2-3c0f840aec8e
STEP: Updating configmap cm-test-opt-upd-04c1b871-a5b9-4669-b3ae-842f37d7a425
STEP: Creating configMap with name cm-test-opt-create-bb5c04ae-9d29-4522-8eae-2bec425c3e06
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:40:59.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4148" for this suite.

• [SLOW TEST:14.090 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":1908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:40:59.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-7b796fb5-ffa8-4c42-a1da-7aea5e575096
STEP: Creating a pod to test consume secrets
Nov 12 10:40:59.733: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f" in namespace "projected-7361" to be "success or failure"
Nov 12 10:40:59.735: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.001984ms
Nov 12 10:41:01.738: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005030708s
Nov 12 10:41:03.741: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007748265s
Nov 12 10:41:05.744: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010809392s
Nov 12 10:41:07.746: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013498102s
Nov 12 10:41:09.749: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016179601s
STEP: Saw pod success
Nov 12 10:41:09.749: INFO: Pod "pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f" satisfied condition "success or failure"
Nov 12 10:41:09.751: INFO: Trying to get logs from node node4 pod pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f container projected-secret-volume-test: 
STEP: delete the pod
Nov 12 10:41:09.761: INFO: Waiting for pod pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f to disappear
Nov 12 10:41:09.763: INFO: Pod pod-projected-secrets-3666ae07-b27b-4626-b3de-f94280abc03f no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:41:09.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7361" for this suite.

• [SLOW TEST:10.057 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":1949,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:41:09.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-f67540c2-e1c1-43bd-ae4f-3bffbd1516fe
STEP: Creating a pod to test consume configMaps
Nov 12 10:41:09.791: INFO: Waiting up to 5m0s for pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b" in namespace "configmap-3308" to be "success or failure"
Nov 12 10:41:09.793: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.636726ms
Nov 12 10:41:11.795: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003890041s
Nov 12 10:41:13.797: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006505319s
Nov 12 10:41:15.800: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009034413s
Nov 12 10:41:17.802: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011520378s
Nov 12 10:41:19.805: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014148733s
STEP: Saw pod success
Nov 12 10:41:19.805: INFO: Pod "pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b" satisfied condition "success or failure"
Nov 12 10:41:19.807: INFO: Trying to get logs from node node3 pod pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b container configmap-volume-test: 
STEP: delete the pod
Nov 12 10:41:19.818: INFO: Waiting for pod pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b to disappear
Nov 12 10:41:19.819: INFO: Pod pod-configmaps-41e5f5ea-4699-4574-bd75-0113b6248e3b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:41:19.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3308" for this suite.

• [SLOW TEST:10.056 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":1951,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:41:19.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 12 10:41:29.875: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:41:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-70" for this suite.

• [SLOW TEST:10.063 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":1953,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:41:29.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 10:41:29.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef" in namespace "projected-2336" to be "success or failure"
Nov 12 10:41:29.912: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 1.534611ms
Nov 12 10:41:31.914: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003979991s
Nov 12 10:41:33.917: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006617754s
Nov 12 10:41:35.920: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009206297s
Nov 12 10:41:37.922: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012047119s
Nov 12 10:41:39.927: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016382622s
STEP: Saw pod success
Nov 12 10:41:39.927: INFO: Pod "downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef" satisfied condition "success or failure"
Nov 12 10:41:39.929: INFO: Trying to get logs from node node3 pod downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef container client-container: 
STEP: delete the pod
Nov 12 10:41:39.941: INFO: Waiting for pod downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef to disappear
Nov 12 10:41:39.943: INFO: Pod downwardapi-volume-c0b46bf7-1c87-4cae-ada1-8cbded36eeef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:41:39.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2336" for this suite.

• [SLOW TEST:10.061 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":1969,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:41:39.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:41:49.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4589" for this suite.

• [SLOW TEST:10.050 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":134,"skipped":2001,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:41:50.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-7e9e066a-4fe9-483a-9f36-87ef227ce7d7
STEP: Creating a pod to test consume secrets
Nov 12 10:41:50.022: INFO: Waiting up to 5m0s for pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40" in namespace "secrets-7142" to be "success or failure"
Nov 12 10:41:50.024: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Pending", Reason="", readiness=false. Elapsed: 1.654774ms
Nov 12 10:41:52.027: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004234228s
Nov 12 10:41:54.029: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006952686s
Nov 12 10:41:56.032: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009745392s
Nov 12 10:41:58.034: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01182207s
Nov 12 10:42:00.036: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013944309s
Nov 12 10:42:02.039: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.016071287s
STEP: Saw pod success
Nov 12 10:42:02.039: INFO: Pod "pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40" satisfied condition "success or failure"
Nov 12 10:42:02.040: INFO: Trying to get logs from node node2 pod pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40 container secret-env-test: 
STEP: delete the pod
Nov 12 10:42:02.057: INFO: Waiting for pod pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40 to disappear
Nov 12 10:42:02.059: INFO: Pod pod-secrets-d9f29dc8-a172-4776-9140-6226fd28ee40 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:42:02.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7142" for this suite.

• [SLOW TEST:12.061 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2026,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:42:02.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5264
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 12 10:42:02.077: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 12 10:42:40.130: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.56:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5264 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:42:40.130: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:42:40.247: INFO: Found all expected endpoints: [netserver-0]
Nov 12 10:42:40.249: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.42:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5264 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:42:40.249: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:42:40.349: INFO: Found all expected endpoints: [netserver-1]
Nov 12 10:42:40.350: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.64:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5264 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:42:40.350: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:42:40.447: INFO: Found all expected endpoints: [netserver-2]
Nov 12 10:42:40.449: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.4.66:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5264 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:42:40.449: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:42:40.547: INFO: Found all expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:42:40.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5264" for this suite.

• [SLOW TEST:38.489 seconds]
[sig-network] Networking
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:42:40.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-71a6b6ee-f858-4a4f-81b3-af643b06f461
STEP: Creating secret with name secret-projected-all-test-volume-d9895754-882e-4d95-b79d-c95c75e9b327
STEP: Creating a pod to test Check all projections for projected volume plugin
Nov 12 10:42:40.577: INFO: Waiting up to 5m0s for pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5" in namespace "projected-8286" to be "success or failure"
Nov 12 10:42:40.579: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.664342ms
Nov 12 10:42:42.581: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004330972s
Nov 12 10:42:44.584: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006633844s
Nov 12 10:42:46.586: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009289793s
Nov 12 10:42:48.589: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011608865s
Nov 12 10:42:50.592: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014961833s
Nov 12 10:42:52.595: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017408152s
STEP: Saw pod success
Nov 12 10:42:52.595: INFO: Pod "projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5" satisfied condition "success or failure"
Nov 12 10:42:52.597: INFO: Trying to get logs from node node2 pod projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5 container projected-all-volume-test: 
STEP: delete the pod
Nov 12 10:42:52.607: INFO: Waiting for pod projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5 to disappear
Nov 12 10:42:52.608: INFO: Pod projected-volume-eef3492d-fad2-4286-b4bb-241b468c70e5 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:42:52.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8286" for this suite.

• [SLOW TEST:12.060 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2065,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:42:52.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:42:52.631: INFO: Creating deployment "webserver-deployment"
Nov 12 10:42:52.634: INFO: Waiting for observed generation 1
Nov 12 10:42:54.638: INFO: Waiting for all required pods to come up
Nov 12 10:42:54.641: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Nov 12 10:43:08.647: INFO: Waiting for deployment "webserver-deployment" to complete
Nov 12 10:43:08.651: INFO: Updating deployment "webserver-deployment" with a non-existent image
Nov 12 10:43:08.655: INFO: Updating deployment webserver-deployment
Nov 12 10:43:08.655: INFO: Waiting for observed generation 2
Nov 12 10:43:10.659: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Nov 12 10:43:10.662: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Nov 12 10:43:10.664: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Nov 12 10:43:10.670: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Nov 12 10:43:10.670: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Nov 12 10:43:10.671: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Nov 12 10:43:10.676: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Nov 12 10:43:10.676: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Nov 12 10:43:10.681: INFO: Updating deployment webserver-deployment
Nov 12 10:43:10.681: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Nov 12 10:43:10.684: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Nov 12 10:43:10.685: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
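
The 20/13 split above is the proportional-scaling arithmetic at work: with maxSurge=3 the allowed total rises from 13 (10+3) to 33 (30+3), and the 20 new replicas are divided between the two ReplicaSets in proportion to their current sizes. A simplified sketch of that arithmetic; the deployment controller's real rounding and leftover rules (pkg/controller/deployment/util) differ in detail but yield the same split for this run:

package main

import "fmt"

func main() {
	oldRS, newRS := int64(8), int64(5) // .spec.replicas before the scale-up (10 + maxSurge 3 = 13)
	total := oldRS + newRS

	allowed := int64(30 + 3) // new .spec.replicas plus maxSurge
	toAdd := allowed - total // 20 replicas to distribute

	oldAdd := toAdd * oldRS / total    // 20*8/13 = 12 (integer division)
	newAdd := toAdd * newRS / total    // 20*5/13 = 7
	leftover := toAdd - oldAdd - newAdd // 1 replica left after rounding down

	// In this run the leftover lands on the newer ReplicaSet: 8+12=20 and
	// 5+7+1=13, matching the .spec.replicas values verified in the log above.
	fmt.Println("old RS:", oldRS+oldAdd, "new RS:", newRS+newAdd+leftover)
}
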
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Nov 12 10:43:10.689: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-372 /apis/apps/v1/namespaces/deployment-372/deployments/webserver-deployment 41b9c5af-95e0-428e-8f0c-5f389332c66c 17547 3 2020-11-12 10:42:52 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020e8e58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-11-12 10:43:08 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-11-12 10:43:10 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Nov 12 10:43:10.692: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-372 /apis/apps/v1/namespaces/deployment-372/replicasets/webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 17544 3 2020-11-12 10:43:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 41b9c5af-95e0-428e-8f0c-5f389332c66c 0xc0020e9bd7 0xc0020e9bd8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020e9ce8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:43:10.692: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Nov 12 10:43:10.692: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-372 /apis/apps/v1/namespaces/deployment-372/replicasets/webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 17541 3 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 41b9c5af-95e0-428e-8f0c-5f389332c66c 0xc0020e9ab7 0xc0020e9ab8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0020e9b18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:43:10.697: INFO: Pod "webserver-deployment-595b5b9587-4k8dv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4k8dv webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-4k8dv 6ddb0c7e-440c-40f9-814d-222e6a2a7dff 17578 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22747 0xc000c22748}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.698: INFO: Pod "webserver-deployment-595b5b9587-69rmz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-69rmz webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-69rmz 8e20a82c-18b4-4857-8e7c-68d71a03700d 17576 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22837 0xc000c22838}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.698: INFO: Pod "webserver-deployment-595b5b9587-96npx" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-96npx webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-96npx b9762863-06da-48c2-9764-41d81250a4b8 17451 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.2.68"
    ],
    "mac": "0a:58:0a:f4:02:44",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22927 0xc000c22928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.15,PodIP:10.244.2.68,StartTime:2020-11-12 10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://af7813231bbdcc1ca246a0af98659be7dda9cf870101b93b1ee67fd67f2a7cbf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.698: INFO: Pod "webserver-deployment-595b5b9587-9n747" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9n747 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-9n747 e7b553d4-67f4-4b5e-9229-d594026758e8 17454 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.2.66"
    ],
    "mac": "0a:58:0a:f4:02:42",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22ab0 0xc000c22ab1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.15,PodIP:10.244.2.66,StartTime:2020-11-12 10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://74632d437ff5fc7ca547c08193f7ba430b396b03a4e4cb913335272087c10c72,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.698: INFO: Pod "webserver-deployment-595b5b9587-b275r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b275r webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-b275r f39100cd-8e6e-4f76-b1e3-7c7fda4b777c 17580 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22c20 0xc000c22c21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.699: INFO: Pod "webserver-deployment-595b5b9587-bjfzs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bjfzs webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-bjfzs 6c926400-37e5-4af6-93ff-6e9e63f8292c 17485 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.69"
    ],
    "mac": "0a:58:0a:f4:04:45",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22d40 0xc000c22d41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.16,PodIP:10.244.4.69,StartTime:2020-11-12 
10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://47b363e26d1a3a2bea07c30de7764155d4733e7f5ef20ef60dc1e1b66320090d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
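[editor's note] Running pods in this log carry the Multus annotation k8s.v1.cni.cncf.io/networks-status, whose JSON value is pretty-printed above. A small sketch of decoding it; the struct is a hand-written approximation of the fields shown in the log, not the official NetworkStatus type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // networkStatus mirrors the JSON fields printed in the annotation above.
    type networkStatus struct {
        Name      string   `json:"name"`
        Interface string   `json:"interface"`
        IPs       []string `json:"ips"`
        Mac       string   `json:"mac"`
        Default   bool     `json:"default"`
    }

    func main() {
        // Value copied from the bjfzs pod dump above (dns object omitted;
        // unknown JSON fields are ignored by encoding/json anyway).
        raw := `[{
            "name": "default-cni-network",
            "interface": "eth0",
            "ips": ["10.244.4.69"],
            "mac": "0a:58:0a:f4:04:45",
            "default": true
        }]`
        var statuses []networkStatus
        if err := json.Unmarshal([]byte(raw), &statuses); err != nil {
            panic(err)
        }
        // The default:true entry carries the same IP the pod reports in
        // Status.PodIP (10.244.4.69 for webserver-deployment-595b5b9587-bjfzs).
        fmt.Printf("%s on %s -> %v\n", statuses[0].Name, statuses[0].Interface, statuses[0].IPs)
    }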
Nov 12 10:43:10.699: INFO: Pod "webserver-deployment-595b5b9587-c42fc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c42fc webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-c42fc 5bdbdc21-f64b-4c16-8aca-f7690b9a7802 17573 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22eb0 0xc000c22eb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.699: INFO: Pod "webserver-deployment-595b5b9587-fk2sq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fk2sq webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-fk2sq de8c61e9-8a7e-4896-bfe0-c035920affea 17562 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c22fe0 0xc000c22fe1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.699: INFO: Pod "webserver-deployment-595b5b9587-fsfg8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fsfg8 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-fsfg8 10a9c6e7-c878-46cf-b9bd-db364373d6e7 17575 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23130 0xc000c23131}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
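[editor's note] fsfg8 above shows the third state in this rollout: NodeName is empty and Status.Conditions is empty, i.e. the pod has not been scheduled yet, whereas the earlier "not available" pods already carry PodScheduled=True. A hedged sketch separating the three cases visible in these dumps:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // classify distinguishes the three pod states this log prints.
    func classify(pod *corev1.Pod) string {
        if pod.Spec.NodeName == "" {
            // No node assigned and no conditions yet,
            // e.g. webserver-deployment-595b5b9587-fsfg8 above.
            return "unscheduled"
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return "available"
            }
        }
        // Scheduled (PodScheduled=True) but the container is still starting.
        return "scheduled, not yet ready"
    }

    func main() {
        fmt.Println(classify(&corev1.Pod{})) // "unscheduled"
    }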
Nov 12 10:43:10.700: INFO: Pod "webserver-deployment-595b5b9587-g6xbm" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-g6xbm webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-g6xbm 752c3598-9f65-4e25-9521-65c4f7cfdf17 17448 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.2.67"
    ],
    "mac": "0a:58:0a:f4:02:43",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23237 0xc000c23238}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.15,PodIP:10.244.2.67,StartTime:2020-11-12 
10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5cac7cf60bea52d2c086dfa8ae84e6a952220df1693e93bbeabff0bd224c8621,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
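[editor's note] Every PodSpec in this list carries the same two tolerations with TolerationSeconds:*300. These are the pair the DefaultTolerationSeconds admission plugin injects by default rather than anything the test set itself. Reconstructed as Go literals purely to make the printed struct easier to read, assuming the standard k8s.io/api types:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        seconds := int64(300)
        // With these tolerations, a pod is evicted roughly 300s after its
        // node goes NotReady or unreachable, matching the dumps above.
        defaults := []corev1.Toleration{
            {Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists,
                Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
            {Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists,
                Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
        }
        fmt.Printf("%+v\n", defaults)
    }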
Nov 12 10:43:10.700: INFO: Pod "webserver-deployment-595b5b9587-kjkr8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kjkr8 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-kjkr8 f47c1375-fd7b-4db0-b08d-e926d6a6d456 17574 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c233e0 0xc000c233e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.700: INFO: Pod "webserver-deployment-595b5b9587-nnd4p" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nnd4p webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-nnd4p 5e75176f-7937-4e1d-8b82-90c1415aa221 17572 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c234f0 0xc000c234f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.700: INFO: Pod "webserver-deployment-595b5b9587-nt2wp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nt2wp webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-nt2wp 5accbcbf-9576-43d9-a499-f8eb46f4a2e8 17584 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c235d7 0xc000c235d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.700: INFO: Pod "webserver-deployment-595b5b9587-pvp62" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pvp62 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-pvp62 120ad81c-3e8e-4a6b-9b67-81982a329678 17409 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.1.58"
    ],
    "mac": "0a:58:0a:f4:01:3a",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c236f0 0xc000c236f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.13,PodIP:10.244.1.58,StartTime:2020-11-12 
10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7924ccd29cd8610cbdbf73ce017e121565636bc42bc91e61d89663cbf6ddcc98,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
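[editor's note] Each dump reports QOSClass:BestEffort, which follows from the empty Requests and Limits in the httpd container spec. A simplified sketch of that classification; the kubelet's real logic also distinguishes Burstable from Guaranteed by comparing requests against limits, which this sketch ignores:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isBestEffort returns true when no container sets any resource
    // requests or limits, as in every pod printed in this block.
    func isBestEffort(pod *corev1.Pod) bool {
        for _, c := range pod.Spec.Containers {
            if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
                return false
            }
        }
        return true
    }

    func main() {
        pod := &corev1.Pod{Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "httpd"}},
        }}
        fmt.Println(isBestEffort(pod)) // true
    }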
Nov 12 10:43:10.701: INFO: Pod "webserver-deployment-595b5b9587-rvwrz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rvwrz webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-rvwrz f7f62893-5844-482e-9a35-4ec770eed30f 17577 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23880 0xc000c23881}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.701: INFO: Pod "webserver-deployment-595b5b9587-sz4x4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sz4x4 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-sz4x4 4a01ec5c-68c8-4e5d-b266-6cad51d4ec72 17417 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.45"
    ],
    "mac": "0a:58:0a:f4:03:2d",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23967 0xc000c23968}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.14,PodIP:10.244.3.45,StartTime:2020-11-12 
10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://932b082dddb32937e3f6488b8746ecb403819d5e7f3fa3e796e22c09d0453b70,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.702: INFO: Pod "webserver-deployment-595b5b9587-v7bq7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-v7bq7 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-v7bq7 5d510dc6-0827-4402-bccf-c55884c7f1bf 17556 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23ae0 0xc000c23ae1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.702: INFO: Pod "webserver-deployment-595b5b9587-vmjzr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vmjzr webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-vmjzr 5b517c82-e43e-4ee1-8131-8cce3f7bc435 17414 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.3.46"
    ],
    "mac": "0a:58:0a:f4:03:2e",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23c00 0xc000c23c01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.14,PodIP:10.244.3.46,StartTime:2020-11-12 
10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d71928fe2cb1a614b3cddad5a7067e252b56e482a6cf06a38638867b3558fec6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.702: INFO: Pod "webserver-deployment-595b5b9587-vscn2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vscn2 webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-vscn2 5b15e704-a6d5-4e77-b68d-868acfdbb48a 17548 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23d70 0xc000c23d71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
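[editor's note] All of the per-pod lines in this block come from listing the ReplicaSet's pods in namespace deployment-372 by their labels. A hedged client-go sketch of an equivalent query (modern client-go v0.18+ signatures; the kubeconfig path is a placeholder, not taken from this run):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; adjust for your environment.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)
        // Namespace and label selector as printed in the dumps above.
        pods, err := client.CoreV1().Pods("deployment-372").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=httpd,pod-template-hash=595b5b9587"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s phase=%s node=%s\n", p.Name, p.Status.Phase, p.Spec.NodeName)
        }
    }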
Nov 12 10:43:10.703: INFO: Pod "webserver-deployment-595b5b9587-wctrc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wctrc webserver-deployment-595b5b9587- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-595b5b9587-wctrc 6595615a-e489-4073-8fcf-ca938796ef9f 17406 0 2020-11-12 10:42:52 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.1.57"
    ],
    "mac": "0a:58:0a:f4:01:39",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b5ed33d3-a470-4730-82d0-e22d9f188c4a 0xc000c23e90 0xc000c23e91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.13,PodIP:10.244.1.57,StartTime:2020-11-12 10:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:43:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://34f4482adf454acfe9b6fc5a59bd34a5f7bcb8db3f41ce4a5dcab9a838dcd138,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.703: INFO: Pod "webserver-deployment-c7997dcc8-bq69f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bq69f webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-bq69f 1646968f-e398-4421-8fee-9476d01b65d0 17520 0 2020-11-12 10:43:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208000 0xc002208001}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.14,PodIP:,StartTime:2020-11-12 10:43:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.703: INFO: Pod "webserver-deployment-c7997dcc8-clkmd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-clkmd webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-clkmd 61ca4373-b2fa-40e7-a92a-1f1a99d48c5d 17568 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208340 0xc002208341}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.703: INFO: Pod "webserver-deployment-c7997dcc8-ctpvv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ctpvv webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-ctpvv 04ba72ca-5913-4ec2-b601-58f80ecd3ba4 17586 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208487 0xc002208488}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.703: INFO: Pod "webserver-deployment-c7997dcc8-dfkbf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dfkbf webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-dfkbf 16c5987b-80fe-4727-b41d-424fa908a072 17521 0 2020-11-12 10:43:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208750 0xc002208751}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.15,PodIP:,StartTime:2020-11-12 10:43:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.703: INFO: Pod "webserver-deployment-c7997dcc8-f2b7t" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f2b7t webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-f2b7t d95e7846-d6af-4fdb-bfff-1877f271abbc 17581 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208a10 0xc002208a11}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-fsdq6" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fsdq6 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-fsdq6 5c416a74-24c9-4b43-9303-d58998f88786 17537 0 2020-11-12 10:43:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208ba7 0xc002208ba8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.13,PodIP:,StartTime:2020-11-12 10:43:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-fxnzv" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fxnzv webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-fxnzv e4458710-3cdd-42f2-aacc-4007ac6775e5 17588 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002208f50 0xc002208f51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-g6vr7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g6vr7 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-g6vr7 ac0c8a45-345c-4bb3-9220-bb87cead21d8 17569 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002209070 0xc002209071}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-j2jjp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j2jjp webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-j2jjp c1fc66a6-a436-41e5-8472-1fa1d74ef61f 17554 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002209350 0xc002209351}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-lc6h2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lc6h2 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-lc6h2 8f19684b-8ea7-4f5e-8e8a-71dda22e65d9 17564 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002209470 0xc002209471}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-pbw46" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pbw46 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-pbw46 f05f23c1-5703-428f-a47e-190b423fad95 17516 0 2020-11-12 10:43:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc0022095c0 0xc0022095c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.13,PodIP:,StartTime:2020-11-12 10:43:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.704: INFO: Pod "webserver-deployment-c7997dcc8-pctvh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pctvh webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-pctvh f469bef2-8fa2-4303-a585-a1483e19e127 17587 0 2020-11-12 10:43:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002209730 0xc002209731}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Nov 12 10:43:10.705: INFO: Pod "webserver-deployment-c7997dcc8-tvs97" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tvs97 webserver-deployment-c7997dcc8- deployment-372 /api/v1/namespaces/deployment-372/pods/webserver-deployment-c7997dcc8-tvs97 a2fcd745-6aac-41f1-abf4-ab4410aa842c 17536 0 2020-11-12 10:43:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 bb49917a-eff1-42b5-ba74-fb865d4f55a2 0xc002209a60 0xc002209a61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5h2kl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5h2kl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5h2kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:43:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.16,PodIP:,StartTime:2020-11-12 10:43:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:43:10.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-372" for this suite.

• [SLOW TEST:18.094 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":138,"skipped":2132,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:43:10.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Nov 12 10:43:10.727: INFO: Waiting up to 5m0s for pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4" in namespace "var-expansion-2648" to be "success or failure"
Nov 12 10:43:10.728: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 1.415754ms
Nov 12 10:43:12.731: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003896667s
Nov 12 10:43:14.733: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006469029s
Nov 12 10:43:16.736: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008641443s
Nov 12 10:43:18.738: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011068817s
Nov 12 10:43:20.740: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.013420021s
Nov 12 10:43:22.743: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.015705576s
Nov 12 10:43:24.745: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.017876663s
STEP: Saw pod success
Nov 12 10:43:24.745: INFO: Pod "var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4" satisfied condition "success or failure"
Nov 12 10:43:24.748: INFO: Trying to get logs from node node3 pod var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4 container dapi-container: 
STEP: delete the pod
Nov 12 10:43:24.764: INFO: Waiting for pod var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4 to disappear
Nov 12 10:43:24.766: INFO: Pod var-expansion-19f06326-ccf5-466b-b319-1cdafb7fe8c4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:43:24.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2648" for this suite.

• [SLOW TEST:14.061 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2140,"failed":0}
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:43:24.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 12 10:43:24.805: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:24.807: INFO: Number of nodes with available pods: 0
Nov 12 10:43:24.807: INFO: Node node1 is running more than one daemon pod
Nov 12 10:43:33.810: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:33.813: INFO: Number of nodes with available pods: 0
Nov 12 10:43:33.813: INFO: Node node1 is running more than one daemon pod
Nov 12 10:43:34.810: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:34.813: INFO: Number of nodes with available pods: 2
Nov 12 10:43:34.813: INFO: Node node1 is running more than one daemon pod
Nov 12 10:43:35.811: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:35.814: INFO: Number of nodes with available pods: 3
Nov 12 10:43:35.814: INFO: Node node1 is running more than one daemon pod
Nov 12 10:43:36.810: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:36.813: INFO: Number of nodes with available pods: 4
Nov 12 10:43:36.813: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Stop a daemon pod, check that the daemon pod is revived.
Nov 12 10:43:36.823: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:36.826: INFO: Number of nodes with available pods: 3
Nov 12 10:43:36.826: INFO: Node node4 is running more than one daemon pod
Nov 12 10:43:37.830: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:37.832: INFO: Number of nodes with available pods: 3
Nov 12 10:43:37.832: INFO: Node node4 is running more than one daemon pod
Nov 12 10:43:58.830: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:58.832: INFO: Number of nodes with available pods: 3
Nov 12 10:43:58.832: INFO: Node node4 is running more than one daemon pod
Nov 12 10:43:59.831: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:43:59.834: INFO: Number of nodes with available pods: 4
Nov 12 10:43:59.834: INFO: Number of running nodes: 4, number of available pods: 4
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7606, will wait for the garbage collector to delete the pods
Nov 12 10:43:59.892: INFO: Deleting DaemonSet.extensions daemon-set took: 3.885559ms
Nov 12 10:43:59.992: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.260811ms
Nov 12 10:44:09.194: INFO: Number of nodes with available pods: 0
Nov 12 10:44:09.194: INFO: Number of running nodes: 0, number of available pods: 0
Nov 12 10:44:09.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7606/daemonsets","resourceVersion":"18163"},"items":null}

Nov 12 10:44:09.203: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7606/pods","resourceVersion":"18163"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:44:09.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7606" for this suite.

• [SLOW TEST:44.451 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":140,"skipped":2140,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:44:09.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:44:09.615: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:44:11.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:44:19.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740774649, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:44:22.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:44:22.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3913-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:44:23.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3789" for this suite.
STEP: Destroying namespace "webhook-3789-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":141,"skipped":2160,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:44:23.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Nov 12 10:44:23.734: INFO: Waiting up to 5m0s for pod "pod-09af9578-429e-4f3f-8640-68585606d40e" in namespace "emptydir-967" to be "success or failure"
Nov 12 10:44:23.735: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.397783ms
Nov 12 10:44:25.739: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005183326s
Nov 12 10:44:27.742: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008206826s
Nov 12 10:44:29.744: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010180078s
Nov 12 10:44:31.746: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012176159s
Nov 12 10:44:33.748: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014656466s
STEP: Saw pod success
Nov 12 10:44:33.748: INFO: Pod "pod-09af9578-429e-4f3f-8640-68585606d40e" satisfied condition "success or failure"
Nov 12 10:44:33.750: INFO: Trying to get logs from node node2 pod pod-09af9578-429e-4f3f-8640-68585606d40e container test-container: 
STEP: delete the pod
Nov 12 10:44:33.766: INFO: Waiting for pod pod-09af9578-429e-4f3f-8640-68585606d40e to disappear
Nov 12 10:44:33.768: INFO: Pod pod-09af9578-429e-4f3f-8640-68585606d40e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:44:33.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-967" for this suite.

• [SLOW TEST:10.056 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2168,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:44:33.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 10:44:33.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1" in namespace "projected-6973" to be "success or failure"
Nov 12 10:44:33.793: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 1.695079ms
Nov 12 10:44:35.799: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006863549s
Nov 12 10:44:37.802: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010085299s
Nov 12 10:44:39.804: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012356276s
Nov 12 10:44:41.807: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014844108s
Nov 12 10:44:43.809: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.017353671s
STEP: Saw pod success
Nov 12 10:44:43.809: INFO: Pod "downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1" satisfied condition "success or failure"
Nov 12 10:44:43.811: INFO: Trying to get logs from node node2 pod downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1 container client-container: 
STEP: delete the pod
Nov 12 10:44:43.821: INFO: Waiting for pod downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1 to disappear
Nov 12 10:44:43.822: INFO: Pod downwardapi-volume-2f1e11df-758f-4f79-b77a-e163c625a7a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:44:43.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6973" for this suite.

• [SLOW TEST:10.056 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:44:43.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Nov 12 10:44:43.851: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix342476057/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:44:43.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3564" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":144,"skipped":2207,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:44:43.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:44:43.964: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf" in namespace "security-context-test-2430" to be "success or failure"
Nov 12 10:44:43.966: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.609072ms
Nov 12 10:44:45.969: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004118372s
Nov 12 10:44:47.971: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006996313s
Nov 12 10:44:49.974: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009652097s
Nov 12 10:44:51.977: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012195628s
Nov 12 10:44:53.979: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014845926s
Nov 12 10:44:53.979: INFO: Pod "busybox-user-65534-e2e113a3-106f-489c-acf0-81308938b8bf" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:44:53.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2430" for this suite.

• [SLOW TEST:10.039 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2215,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:44:53.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-8452
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8452
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8452
Nov 12 10:44:54.010: INFO: Found 0 stateful pods, waiting for 1
Nov 12 10:45:04.013: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 12 10:45:14.015: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Nov 12 10:45:14.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 10:45:14.357: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 10:45:14.358: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 10:45:14.358: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

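The mv above is the test's readiness switch: the httpd-based image serves /usr/local/apache2/htdocs/index.html and the stateful pod's readiness probe fetches that page, so moving the file aside makes the probe fail and the pod drops to Running/not-Ready, which is what halts further ordered scaling. The flip can be watched directly (same namespace as above; the jsonpath filter assumes a reasonably recent kubectl):

    kubectl -n statefulset-8452 get pod ss-0 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # False while index.html sits in /tmp
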
Nov 12 10:45:14.360: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Nov 12 10:45:24.363: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 10:45:24.363: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 10:45:24.372: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999496s
Nov 12 10:45:25.374: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996975307s
Nov 12 10:45:26.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.994317499s
Nov 12 10:45:27.380: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.990918964s
Nov 12 10:45:28.383: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.98832131s
Nov 12 10:45:29.386: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.98552935s
Nov 12 10:45:30.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.982854322s
Nov 12 10:45:31.392: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.980017448s
Nov 12 10:45:32.395: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.976879127s
Nov 12 10:45:33.398: INFO: Verifying statefulset ss doesn't scale past 1 for another 973.900585ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8452
Nov 12 10:45:34.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:45:34.660: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Nov 12 10:45:34.660: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 10:45:34.660: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 10:45:34.662: INFO: Found 1 stateful pods, waiting for 3
Nov 12 10:45:44.665: INFO: Found 2 stateful pods, waiting for 3
Nov 12 10:45:54.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:45:54.666: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:45:54.666: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Nov 12 10:46:04.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:46:04.666: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 10:46:04.666: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Nov 12 10:46:04.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 10:46:04.916: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 10:46:04.916: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 10:46:04.916: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 10:46:04.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 10:46:05.189: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 10:46:05.189: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 10:46:05.189: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 10:46:05.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 10:46:05.440: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 10:46:05.440: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 10:46:05.440: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 10:46:05.440: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 10:46:05.442: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Nov 12 10:46:15.447: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 10:46:15.447: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 10:46:15.447: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 10:46:15.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999511s
Nov 12 10:46:16.457: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997397317s
Nov 12 10:46:17.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.994622986s
Nov 12 10:46:18.464: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.991128937s
Nov 12 10:46:19.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98778115s
Nov 12 10:46:20.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.984753331s
Nov 12 10:46:21.474: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.981508681s
Nov 12 10:46:22.477: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.977595199s
Nov 12 10:46:23.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.974582026s
Nov 12 10:46:24.483: INFO: Verifying statefulset ss doesn't scale past 3 for another 970.964466ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-8452
Nov 12 10:46:25.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:46:25.740: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Nov 12 10:46:25.740: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 10:46:25.740: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 10:46:25.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:46:26.006: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Nov 12 10:46:26.006: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 10:46:26.006: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 10:46:26.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:46:26.233: INFO: rc: 126
Nov 12 10:46:26.233: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown

stderr:
command terminated with exit code 126

error:
exit status 126
Nov 12 10:46:36.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:46:36.375: INFO: rc: 1
Nov 12 10:46:36.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:49:08.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:49:08.441: INFO: rc: 1
Nov 12 10:49:08.441: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:49:18.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:49:18.570: INFO: rc: 1
Nov 12 10:49:18.570: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:49:28.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:49:28.710: INFO: rc: 1
Nov 12 10:49:28.710: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:49:38.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:49:38.852: INFO: rc: 1
Nov 12 10:49:38.852: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:49:48.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:49:48.981: INFO: rc: 1
Nov 12 10:49:48.985: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:49:58.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:49:59.141: INFO: rc: 1
Nov 12 10:49:59.141: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:50:09.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:50:09.269: INFO: rc: 1
Nov 12 10:50:09.271: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:50:19.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:50:19.433: INFO: rc: 1
Nov 12 10:50:19.433: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:50:29.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:50:29.575: INFO: rc: 1
Nov 12 10:50:29.575: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:50:39.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:50:39.725: INFO: rc: 1
Nov 12 10:50:39.725: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:50:49.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:50:49.864: INFO: rc: 1
Nov 12 10:50:49.864: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:50:59.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:51:00.016: INFO: rc: 1
Nov 12 10:51:00.016: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:51:10.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:51:10.160: INFO: rc: 1
Nov 12 10:51:10.161: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:51:20.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:51:20.298: INFO: rc: 1
Nov 12 10:51:20.298: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Nov 12 10:51:30.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 10:51:30.449: INFO: rc: 1
Nov 12 10:51:30.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Nov 12 10:51:30.449: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Nov 12 10:51:30.465: INFO: Deleting all statefulset in ns statefulset-8452
Nov 12 10:51:30.468: INFO: Scaling statefulset ss to 0
Nov 12 10:51:30.475: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 10:51:30.477: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:51:30.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8452" for this suite.

• [SLOW TEST:396.501 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":146,"skipped":2220,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:51:30.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Nov 12 10:51:30.506: INFO: Waiting up to 5m0s for pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868" in namespace "emptydir-7624" to be "success or failure"
Nov 12 10:51:30.508: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Pending", Reason="", readiness=false. Elapsed: 1.559797ms
Nov 12 10:51:32.510: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003880017s
Nov 12 10:51:34.513: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006170744s
Nov 12 10:51:36.515: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008740094s
Nov 12 10:51:38.518: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011869196s
Nov 12 10:51:40.522: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015906027s
Nov 12 10:51:42.525: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Running", Reason="", readiness=true. Elapsed: 12.018278762s
Nov 12 10:51:44.528: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.02109458s
STEP: Saw pod success
Nov 12 10:51:44.528: INFO: Pod "pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868" satisfied condition "success or failure"
Nov 12 10:51:44.530: INFO: Trying to get logs from node node2 pod pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868 container test-container: 
STEP: delete the pod
Nov 12 10:51:44.547: INFO: Waiting for pod pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868 to disappear
Nov 12 10:51:44.548: INFO: Pod pod-4a92ac4f-95c3-4c4a-a6e8-88b717697868 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:51:44.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7624" for this suite.

• [SLOW TEST:14.066 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2224,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:51:44.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 12 10:51:55.604: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:51:55.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-32" for this suite.

• [SLOW TEST:11.062 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2249,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:51:55.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7471
STEP: Creating an active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-7471
STEP: creating replication controller externalsvc in namespace services-7471
I1112 10:51:55.644197      10 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7471, replica count: 2
I1112 10:51:58.694776      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:52:01.695103      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:52:04.695396      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:52:07.695703      10 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Nov 12 10:52:07.703: INFO: Creating new exec pod
Nov 12 10:52:17.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7471 execpod5hfhm -- /bin/sh -x -c nslookup clusterip-service'
Nov 12 10:52:17.948: INFO: stderr: "+ nslookup clusterip-service\n"
Nov 12 10:52:17.948: INFO: stdout: "Server:\t\t169.254.25.10\nAddress:\t169.254.25.10#53\n\nclusterip-service.services-7471.svc.cluster.local\tcanonical name = externalsvc.services-7471.svc.cluster.local.\nName:\texternalsvc.services-7471.svc.cluster.local\nAddress: 10.233.54.63\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7471, will wait for the garbage collector to delete the pods
Nov 12 10:52:18.008: INFO: Deleting ReplicationController externalsvc took: 3.571446ms
Nov 12 10:52:18.308: INFO: Terminating ReplicationController externalsvc pods took: 300.262062ms
Nov 12 10:52:28.815: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:52:28.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7471" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:33.207 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":149,"skipped":2321,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:52:28.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:52:28.839: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:52:29.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3170" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":150,"skipped":2336,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:52:29.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:52:39.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6751" for this suite.

• [SLOW TEST:10.045 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2353,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:52:39.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-71dd45e3-73f5-4b27-839b-b5732099aad0
STEP: Creating a pod to test consume configMaps
Nov 12 10:52:39.437: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e" in namespace "projected-9353" to be "success or failure"
Nov 12 10:52:39.438: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.392376ms
Nov 12 10:52:41.441: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003954286s
Nov 12 10:52:43.443: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006640004s
Nov 12 10:52:45.446: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009182451s
Nov 12 10:52:47.448: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011509058s
Nov 12 10:52:49.451: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014408s
STEP: Saw pod success
Nov 12 10:52:49.451: INFO: Pod "pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e" satisfied condition "success or failure"
Nov 12 10:52:49.453: INFO: Trying to get logs from node node4 pod pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e container projected-configmap-volume-test: 
STEP: delete the pod
Nov 12 10:52:49.470: INFO: Waiting for pod pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e to disappear
Nov 12 10:52:49.471: INFO: Pod pod-projected-configmaps-0d545e19-c578-4614-9a0f-a0249a08cd1e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:52:49.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9353" for this suite.

• [SLOW TEST:10.061 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:52:49.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:52:49.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Nov 12 10:52:51.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3056 create -f -'
Nov 12 10:52:51.813: INFO: stderr: ""
Nov 12 10:52:51.813: INFO: stdout: "e2e-test-crd-publish-openapi-7802-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Nov 12 10:52:51.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3056 delete e2e-test-crd-publish-openapi-7802-crds test-cr'
Nov 12 10:52:51.977: INFO: stderr: ""
Nov 12 10:52:51.977: INFO: stdout: "e2e-test-crd-publish-openapi-7802-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Nov 12 10:52:51.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3056 apply -f -'
Nov 12 10:52:52.217: INFO: stderr: ""
Nov 12 10:52:52.217: INFO: stdout: "e2e-test-crd-publish-openapi-7802-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Nov 12 10:52:52.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3056 delete e2e-test-crd-publish-openapi-7802-crds test-cr'
Nov 12 10:52:52.352: INFO: stderr: ""
Nov 12 10:52:52.352: INFO: stdout: "e2e-test-crd-publish-openapi-7802-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Nov 12 10:52:52.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7802-crds'
Nov 12 10:52:52.576: INFO: stderr: ""
Nov 12 10:52:52.576: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7802-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:52:55.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3056" for this suite.

• [SLOW TEST:6.001 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":153,"skipped":2388,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:52:55.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Nov 12 10:53:15.522: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:15.524: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:17.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:17.526: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:19.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:19.526: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:21.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:21.527: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:23.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:23.526: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:25.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:25.527: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:27.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:27.527: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 12 10:53:29.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 12 10:53:29.527: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:53:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9886" for this suite.

• [SLOW TEST:34.056 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2388,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:53:29.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Nov 12 10:53:39.566: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-706 PodName:pod-sharedvolume-ad541cf5-3000-404e-adf4-915c8ee1babe ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:53:39.566: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:53:39.682: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:53:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-706" for this suite.

• [SLOW TEST:10.152 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":155,"skipped":2407,"failed":0}
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:53:39.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4811.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4811.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4811.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4811.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4811.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.21.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.21.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.21.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.21.141_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4811.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4811.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4811.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4811.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4811.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4811.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.21.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.21.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.21.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.21.141_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 10:53:51.721: INFO: Unable to read wheezy_udp@dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.728: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.745: INFO: Unable to read jessie_udp@dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.747: INFO: Unable to read jessie_tcp@dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.750: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.752: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local from pod dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d: the server could not find the requested resource (get pods dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d)
Nov 12 10:53:51.766: INFO: Lookups using dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d failed for: [wheezy_udp@dns-test-service.dns-4811.svc.cluster.local wheezy_tcp@dns-test-service.dns-4811.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local jessie_udp@dns-test-service.dns-4811.svc.cluster.local jessie_tcp@dns-test-service.dns-4811.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4811.svc.cluster.local]

Nov 12 10:53:56.814: INFO: DNS probes using dns-4811/dns-test-475606a4-58cd-4f24-a6ca-9cffbc2e001d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:53:56.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4811" for this suite.

• [SLOW TEST:17.149 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":156,"skipped":2407,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:53:56.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-83a1693e-3a7e-414b-9a70-8f2e3f61d7c5
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:54:06.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-644" for this suite.

• [SLOW TEST:10.051 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:54:06.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-2526
STEP: creating replication controller nodeport-test in namespace services-2526
I1112 10:54:06.909967      10 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-2526, replica count: 2
I1112 10:54:09.960534      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:54:12.960834      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:54:15.961078      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:54:18.961318      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 10:54:21.961959      10 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 12 10:54:21.962: INFO: Creating new exec pod
Nov 12 10:54:34.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2526 execpodx49rj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Nov 12 10:54:35.225: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Nov 12 10:54:35.225: INFO: stdout: ""
Nov 12 10:54:35.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2526 execpodx49rj -- /bin/sh -x -c nc -zv -t -w 2 10.233.18.215 80'
Nov 12 10:54:35.477: INFO: stderr: "+ nc -zv -t -w 2 10.233.18.215 80\nConnection to 10.233.18.215 80 port [tcp/http] succeeded!\n"
Nov 12 10:54:35.477: INFO: stdout: ""
Nov 12 10:54:35.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2526 execpodx49rj -- /bin/sh -x -c nc -zv -t -w 2 10.0.20.13 31781'
Nov 12 10:54:35.725: INFO: stderr: "+ nc -zv -t -w 2 10.0.20.13 31781\nConnection to 10.0.20.13 31781 port [tcp/31781] succeeded!\n"
Nov 12 10:54:35.725: INFO: stdout: ""
Nov 12 10:54:35.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2526 execpodx49rj -- /bin/sh -x -c nc -zv -t -w 2 10.0.20.16 31781'
Nov 12 10:54:35.969: INFO: stderr: "+ nc -zv -t -w 2 10.0.20.16 31781\nConnection to 10.0.20.16 31781 port [tcp/31781] succeeded!\n"
Nov 12 10:54:35.969: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:54:35.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2526" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:29.094 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":158,"skipped":2471,"failed":0}
S
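Note: the three nc probes above exercise the service by DNS name, by ClusterIP, and by nodeIP:nodePort. A hand-run sketch of the same sequence (nodeport-demo and execpod are illustrative names; the ClusterIP 10.233.18.215 and nodePort 31781 seen above are specific to this run):

kubectl create deployment nodeport-demo --image=nginx
kubectl expose deployment nodeport-demo --type=NodePort --port=80
NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
# run the probes from any pod that has nc, as the test does from its exec pod
kubectl exec execpod -- nc -zv -t -w 2 nodeport-demo 80          # service DNS name
kubectl exec execpod -- nc -zv -t -w 2 <cluster-ip> 80           # ClusterIP placeholder
kubectl exec execpod -- nc -zv -t -w 2 <node-ip> "$NODE_PORT"    # every node, same NodePort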
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:54:35.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:54:49.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7468" for this suite.

• [SLOW TEST:13.055 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":159,"skipped":2472,"failed":0}
SSS
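Note: the quota lifecycle above (create quota, create a fitting pod, watch usage, reject over-quota pods, release usage on delete) can be reproduced with a sketch like this; the name demo-quota and the limits are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    pods: "1"
    requests.cpu: "500m"
    requests.memory: 512Mi
EOF
# .status.used rises when a fitting pod is created and drops when it is
# deleted; a pod exceeding the remaining quota is rejected at admission
kubectl describe resourcequota demo-quota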
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:54:49.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-0521a7c2-49fa-4691-8eac-9bcd81a2fc77
STEP: Creating a pod to test consume configMaps
Nov 12 10:54:49.058: INFO: Waiting up to 5m0s for pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f" in namespace "configmap-8025" to be "success or failure"
Nov 12 10:54:49.060: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.816399ms
Nov 12 10:54:51.062: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00423315s
Nov 12 10:54:53.065: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006805282s
Nov 12 10:54:55.069: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01066166s
Nov 12 10:54:57.072: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.014488145s
Nov 12 10:54:59.075: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.016584579s
Nov 12 10:55:01.077: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.019199592s
STEP: Saw pod success
Nov 12 10:55:01.077: INFO: Pod "pod-configmaps-2740b230-52ba-406a-8693-18935206f00f" satisfied condition "success or failure"
Nov 12 10:55:01.079: INFO: Trying to get logs from node node1 pod pod-configmaps-2740b230-52ba-406a-8693-18935206f00f container configmap-volume-test: 
STEP: delete the pod
Nov 12 10:55:01.099: INFO: Waiting for pod pod-configmaps-2740b230-52ba-406a-8693-18935206f00f to disappear
Nov 12 10:55:01.100: INFO: Pod pod-configmaps-2740b230-52ba-406a-8693-18935206f00f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:55:01.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8025" for this suite.

• [SLOW TEST:12.068 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2475,"failed":0}
SSSSSSSSSSSSSSSS
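Note: the non-root variant above is ordinary ConfigMap volume consumption with a pod-level runAsUser. A sketch (all names and the UID 1000 are illustrative; the real test also controls the volume's file mode):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root, the point of the [LinuxOnly] variant
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/cm/key"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm            # assumes a ConfigMap demo-cm with key "key" exists
EOF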
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:55:01.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:55:02.412: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:55:04.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:06.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:08.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:10.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:12.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775302, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:55:15.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:55:27.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2814" for this suite.
STEP: Destroying namespace "webhook-2814-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:26.419 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":161,"skipped":2491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
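Note: the timeout behavior exercised above is controlled by two per-webhook fields: timeoutSeconds (1-30, defaulted to 10 in admissionregistration.k8s.io/v1) and failurePolicy (Fail rejects on timeout, Ignore admits). A registration sketch; the webhook name, service reference, and path are illustrative, and caBundle is omitted:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-demo
webhooks:
- name: slow.example.com
  timeoutSeconds: 1          # shorter than a 5s-slow backend -> request fails...
  failurePolicy: Ignore      # ...unless timeouts are ignored, as tested above
  clientConfig:
    service:
      namespace: webhook-2814
      name: e2e-test-webhook
      path: /slow            # illustrative path
      port: 443
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF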
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:55:27.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:55:27.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3540'
Nov 12 10:55:27.824: INFO: stderr: ""
Nov 12 10:55:27.824: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Nov 12 10:55:27.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3540'
Nov 12 10:55:28.063: INFO: stderr: ""
Nov 12 10:55:28.063: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Nov 12 10:55:29.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:29.066: INFO: Found 0 / 1
Nov 12 10:55:30.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:30.066: INFO: Found 0 / 1
Nov 12 10:55:31.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:31.066: INFO: Found 0 / 1
Nov 12 10:55:32.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:32.066: INFO: Found 0 / 1
Nov 12 10:55:33.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:33.066: INFO: Found 0 / 1
Nov 12 10:55:34.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:34.066: INFO: Found 0 / 1
Nov 12 10:55:35.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:35.066: INFO: Found 0 / 1
Nov 12 10:55:36.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:36.066: INFO: Found 0 / 1
Nov 12 10:55:37.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:37.066: INFO: Found 0 / 1
Nov 12 10:55:38.066: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:38.067: INFO: Found 1 / 1
Nov 12 10:55:38.067: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Nov 12 10:55:38.068: INFO: Selector matched 1 pods for map[app:agnhost]
Nov 12 10:55:38.068: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Nov 12 10:55:38.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-j4ctb --namespace=kubectl-3540'
Nov 12 10:55:38.258: INFO: stderr: ""
Nov 12 10:55:38.258: INFO: stdout: "Name:         agnhost-master-j4ctb\nNamespace:    kubectl-3540\nPriority:     0\nNode:         node4/10.0.20.16\nStart Time:   Thu, 12 Nov 2020 10:55:27 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  k8s.v1.cni.cncf.io/networks-status:\n                [{\n                    \"name\": \"default-cni-network\",\n                    \"interface\": \"eth0\",\n                    \"ips\": [\n                        \"10.244.4.87\"\n                    ],\n                    \"mac\": \"0a:58:0a:f4:04:57\",\n                    \"default\": true,\n                    \"dns\": {}\n                }]\nStatus:       Running\nIP:           10.244.4.87\nIPs:\n  IP:           10.244.4.87\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://8c12e8d53dcbcfb14448d84e92331af32023768eb54ef8cfdf54484015d80597\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 12 Nov 2020 10:55:36 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hpq7t (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-hpq7t:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-hpq7t\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  10s   default-scheduler  Successfully assigned kubectl-3540/agnhost-master-j4ctb to node4\n  Normal  Pulled     2s    kubelet, node4     Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s    kubelet, node4     Created container agnhost-master\n  Normal  Started    2s    kubelet, node4     Started container agnhost-master\n"
Nov 12 10:55:38.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3540'
Nov 12 10:55:38.464: INFO: stderr: ""
Nov 12 10:55:38.464: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3540\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: agnhost-master-j4ctb\n"
Nov 12 10:55:38.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3540'
Nov 12 10:55:38.632: INFO: stderr: ""
Nov 12 10:55:38.632: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3540\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.233.47.56\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.4.87:6379\nSession Affinity:  None\nEvents:            \n"
Nov 12 10:55:38.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node master1'
Nov 12 10:55:38.792: INFO: stderr: ""
Nov 12 10:55:38.792: INFO: stdout: "Name:               master1\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=master1\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"9a:aa:5e:53:0b:da\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 10.0.20.12\n                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 12 Nov 2020 09:42:29 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  master1\n  AcquireTime:     \n  RenewTime:       Thu, 12 Nov 2020 10:55:33 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 12 Nov 2020 10:55:37 +0000   Thu, 12 Nov 2020 09:42:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 12 Nov 2020 10:55:37 +0000   Thu, 12 Nov 2020 09:42:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 12 Nov 2020 10:55:37 +0000   Thu, 12 Nov 2020 09:42:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 12 Nov 2020 10:55:37 +0000   Thu, 12 Nov 2020 09:45:39 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  10.0.20.12\n  Hostname:    master1\nCapacity:\n  cpu:                48\n  ephemeral-storage:  1099913252Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131731220Ki\n  pods:               110\nAllocatable:\n  cpu:                47800m\n  ephemeral-storage:  1013680051365\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131128820Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 a708fde55ac0403480cb0487d9f2f29e\n  System UUID:                3856384D-4139-5355-4535-343430424631\n  Boot ID:                    8ce6f314-be3a-48a1-bc5f-9e4e929a848a\n  Kernel Version:             3.10.0-1127.19.1.el7.x86_64\n  OS Image:                   CentOS Linux 7 (Core)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.16.7\n  Kube-Proxy Version:         v1.16.7\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-58687784f9-2j7x8                 100m (0%)     0 (0%)      70Mi 
(0%)        170Mi (0%)     69m\n  kube-system                 dns-autoscaler-79599df498-wj5fk          20m (0%)      0 (0%)      10Mi (0%)        0 (0%)         69m\n  kube-system                 kube-apiserver-master1                   250m (0%)     0 (0%)      0 (0%)           0 (0%)         71m\n  kube-system                 kube-controller-manager-master1          200m (0%)     0 (0%)      0 (0%)           0 (0%)         71m\n  kube-system                 kube-flannel-4vqxc                       150m (0%)     300m (0%)   64M (0%)         500M (0%)      70m\n  kube-system                 kube-multus-ds-amd64-g8x98               100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      69m\n  kube-system                 kube-proxy-pgsmw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         70m\n  kube-system                 kube-scheduler-master1                   100m (0%)     0 (0%)      0 (0%)           0 (0%)         71m\n  kube-system                 kubernetes-dashboard-556b9ff8f8-8nxlf    50m (0%)      100m (0%)   64M (0%)         256M (0%)      69m\n  kube-system                 nodelocaldns-99fvk                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     69m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests       Limits\n  --------           --------       ------\n  cpu                1070m (2%)     500m (1%)\n  memory             329800Ki (0%)  1164944640 (0%)\n  ephemeral-storage  0 (0%)         0 (0%)\nEvents:              \n"
Nov 12 10:55:38.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3540'
Nov 12 10:55:38.933: INFO: stderr: ""
Nov 12 10:55:38.933: INFO: stdout: "Name:         kubectl-3540\nLabels:       e2e-framework=kubectl\n              e2e-run=4defbc17-803f-431b-b2bd-bd97a166bd5f\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:55:38.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3540" for this suite.

• [SLOW TEST:11.413 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1048
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":162,"skipped":2553,"failed":0}
SSSSSSSSS
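Note: minus the --kubeconfig flag, the describe sequence above is directly reusable for debugging any namespace (resource names here are the ones from this run):

kubectl describe pod agnhost-master-j4ctb -n kubectl-3540
kubectl describe rc agnhost-master -n kubectl-3540
kubectl describe service agnhost-master -n kubectl-3540
kubectl describe node master1
kubectl describe namespace kubectl-3540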
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:55:38.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 10:55:39.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 10:55:41.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:43.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:45.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:47.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:55:49.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775339, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 10:55:52.525: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:55:52.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-331" for this suite.
STEP: Destroying namespace "webhook-331-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.634 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":163,"skipped":2562,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
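Note: a mutating webhook like the one registered above answers each AdmissionReview with a base64-encoded JSONPatch; the API server applies the patch and then re-runs defaulting on the result, which is what "apply defaults after mutation" verifies. Response-shape sketch (the uid and patch contents are illustrative):

cat <<'EOF'
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from the incoming request>",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "<base64 of [{\"op\":\"add\",\"path\":\"/spec/containers/-\",\"value\":{\"name\":\"added\",\"image\":\"busybox\"}}]>"
  }
}
EOF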
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:55:52.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Nov 12 10:55:52.595: INFO: Created pod &Pod{ObjectMeta:{dns-5618  dns-5618 /api/v1/namespaces/dns-5618/pods/dns-5618 a23d5a42-aa43-43e4-aa1d-dbd87d66cbb1 21600 0 2020-11-12 10:55:52 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fkvx5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fkvx5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fkvx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Nov 12 10:56:04.599: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5618 PodName:dns-5618 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:56:04.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized DNS server is configured on pod...
Nov 12 10:56:04.724: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5618 PodName:dns-5618 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 10:56:04.724: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 10:56:04.832: INFO: Deleting pod dns-5618...
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:56:04.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5618" for this suite.

• [SLOW TEST:12.265 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":164,"skipped":2590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
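Note: the pod dump above shows DNSPolicy:None with Nameservers:[1.1.1.1] and Searches:[resolv.conf.local]. The equivalent manifest, plus a check mirroring the agnhost probes (the pod name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  dnsPolicy: "None"                  # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]         # same values as the test pod above
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
EOF
kubectl exec dns-demo -- cat /etc/resolv.conf   # should contain only the custom entries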
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:56:04.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Nov 12 10:56:04.861: INFO: Waiting up to 5m0s for pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701" in namespace "emptydir-5657" to be "success or failure"
Nov 12 10:56:04.863: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03666ms
Nov 12 10:56:06.865: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004434392s
Nov 12 10:56:08.868: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007582399s
Nov 12 10:56:10.871: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009867429s
Nov 12 10:56:12.874: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013051766s
Nov 12 10:56:14.876: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015491303s
Nov 12 10:56:16.879: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017761057s
STEP: Saw pod success
Nov 12 10:56:16.879: INFO: Pod "pod-8b839943-3c7c-41d5-aea7-fddb1f3be701" satisfied condition "success or failure"
Nov 12 10:56:16.881: INFO: Trying to get logs from node node4 pod pod-8b839943-3c7c-41d5-aea7-fddb1f3be701 container test-container: 
STEP: delete the pod
Nov 12 10:56:16.897: INFO: Waiting for pod pod-8b839943-3c7c-41d5-aea7-fddb1f3be701 to disappear
Nov 12 10:56:16.898: INFO: Pod pod-8b839943-3c7c-41d5-aea7-fddb1f3be701 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:56:16.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5657" for this suite.

• [SLOW TEST:12.060 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
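Note: "(root,0644,default)" means the test writes a 0644-mode file as root into an emptyDir on the default medium (node disk). A sketch (names are illustrative; the real test uses its mounttest image rather than busybox):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium; medium: Memory would be tmpfs
EOF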
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:56:16.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Nov 12 10:56:27.434: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2477 pod-service-account-012e995e-37c7-4098-b372-09c41395f4b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Nov 12 10:56:27.680: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2477 pod-service-account-012e995e-37c7-4098-b372-09c41395f4b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Nov 12 10:56:27.953: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2477 pod-service-account-012e995e-37c7-4098-b372-09c41395f4b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:56:28.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2477" for this suite.

• [SLOW TEST:11.303 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":166,"skipped":2654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
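Note: the three files read above are the standard contents of the auto-mounted ServiceAccount secret, present in any pod that has not set automountServiceAccountToken: false; by hand (pod name is a placeholder):

kubectl exec <pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount
# expected entries: ca.crt  namespace  token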
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:56:28.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Nov 12 10:56:38.753: INFO: Successfully updated pod "annotationupdatefc236df1-9b71-4bf1-ad6d-1c82e8d6d7bb"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:56:40.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5605" for this suite.

• [SLOW TEST:12.565 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2738,"failed":0}
SSSSSSSSSSSSSSSSSS
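Note: the annotation update above is observed through a projected downwardAPI volume, which the kubelet refreshes on its sync loop after the pod's metadata changes. A sketch (names and annotation values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo builder=bob --overwrite
kubectl exec annotation-demo -- cat /etc/podinfo/annotations   # updates after the next kubelet sync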
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:56:40.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Nov 12 10:57:02.813: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 12 10:57:02.815: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 12 10:57:04.815: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 12 10:57:04.818: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 12 10:57:06.815: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 12 10:57:06.818: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 12 10:57:08.815: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 12 10:57:08.818: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 12 10:57:10.815: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 12 10:57:10.818: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:57:10.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6689" for this suite.

• [SLOW TEST:30.062 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2756,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:57:10.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 12 10:57:10.871: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:10.873: INFO: Number of nodes with available pods: 0
Nov 12 10:57:10.873: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:11.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:11.880: INFO: Number of nodes with available pods: 0
Nov 12 10:57:11.880: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:12.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:12.880: INFO: Number of nodes with available pods: 0
Nov 12 10:57:12.880: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:13.876: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:13.879: INFO: Number of nodes with available pods: 0
Nov 12 10:57:13.879: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:14.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:14.880: INFO: Number of nodes with available pods: 0
Nov 12 10:57:14.880: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:15.876: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:15.878: INFO: Number of nodes with available pods: 0
Nov 12 10:57:15.878: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:16.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:16.880: INFO: Number of nodes with available pods: 0
Nov 12 10:57:16.880: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:17.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:17.880: INFO: Number of nodes with available pods: 0
Nov 12 10:57:17.880: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:18.880: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:18.883: INFO: Number of nodes with available pods: 0
Nov 12 10:57:18.883: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:19.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:19.880: INFO: Number of nodes with available pods: 0
Nov 12 10:57:19.880: INFO: Node node1 is running more than one daemon pod
Nov 12 10:57:20.877: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:20.880: INFO: Number of nodes with available pods: 4
Nov 12 10:57:20.880: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Nov 12 10:57:20.890: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:20.892: INFO: Number of nodes with available pods: 3
Nov 12 10:57:20.892: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:21.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:21.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:21.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:22.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:22.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:22.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:23.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:23.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:23.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:24.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:24.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:24.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:25.897: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:25.900: INFO: Number of nodes with available pods: 3
Nov 12 10:57:25.900: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:26.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:26.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:26.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:27.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:27.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:27.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:28.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:28.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:28.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:29.897: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:29.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:29.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:30.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:30.899: INFO: Number of nodes with available pods: 3
Nov 12 10:57:30.899: INFO: Node node3 is running more than one daemon pod
Nov 12 10:57:31.896: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 10:57:31.899: INFO: Number of nodes with available pods: 4
Nov 12 10:57:31.899: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3257, will wait for the garbage collector to delete the pods
Nov 12 10:57:31.959: INFO: Deleting DaemonSet.extensions daemon-set took: 3.652078ms
Nov 12 10:57:32.259: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.269047ms
Nov 12 10:57:38.861: INFO: Number of nodes with available pods: 0
Nov 12 10:57:38.861: INFO: Number of running nodes: 0, number of available pods: 0
Nov 12 10:57:38.863: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3257/daemonsets","resourceVersion":"22213"},"items":null}

Nov 12 10:57:38.867: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3257/pods","resourceVersion":"22213"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:57:38.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3257" for this suite.

• [SLOW TEST:28.048 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":169,"skipped":2760,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:57:38.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:57:38.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Nov 12 10:57:41.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 create -f -'
Nov 12 10:57:42.152: INFO: stderr: ""
Nov 12 10:57:42.152: INFO: stdout: "e2e-test-crd-publish-openapi-1458-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Nov 12 10:57:42.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 delete e2e-test-crd-publish-openapi-1458-crds test-cr'
Nov 12 10:57:42.304: INFO: stderr: ""
Nov 12 10:57:42.305: INFO: stdout: "e2e-test-crd-publish-openapi-1458-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Nov 12 10:57:42.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 apply -f -'
Nov 12 10:57:42.524: INFO: stderr: ""
Nov 12 10:57:42.524: INFO: stdout: "e2e-test-crd-publish-openapi-1458-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Nov 12 10:57:42.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4225 delete e2e-test-crd-publish-openapi-1458-crds test-cr'
Nov 12 10:57:42.677: INFO: stderr: ""
Nov 12 10:57:42.677: INFO: stdout: "e2e-test-crd-publish-openapi-1458-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Nov 12 10:57:42.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1458-crds'
Nov 12 10:57:42.893: INFO: stderr: ""
Nov 12 10:57:42.894: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1458-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:57:45.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4225" for this suite.

• [SLOW TEST:6.962 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":170,"skipped":2763,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:57:45.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:57:45.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Nov 12 10:57:47.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 create -f -'
Nov 12 10:57:48.127: INFO: stderr: ""
Nov 12 10:57:48.127: INFO: stdout: "e2e-test-crd-publish-openapi-8089-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Nov 12 10:57:48.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 delete e2e-test-crd-publish-openapi-8089-crds test-foo'
Nov 12 10:57:48.272: INFO: stderr: ""
Nov 12 10:57:48.272: INFO: stdout: "e2e-test-crd-publish-openapi-8089-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Nov 12 10:57:48.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 apply -f -'
Nov 12 10:57:48.497: INFO: stderr: ""
Nov 12 10:57:48.497: INFO: stdout: "e2e-test-crd-publish-openapi-8089-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Nov 12 10:57:48.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 delete e2e-test-crd-publish-openapi-8089-crds test-foo'
Nov 12 10:57:48.680: INFO: stderr: ""
Nov 12 10:57:48.680: INFO: stdout: "e2e-test-crd-publish-openapi-8089-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Nov 12 10:57:48.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 create -f -'
Nov 12 10:57:48.868: INFO: rc: 1
Nov 12 10:57:48.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 apply -f -'
Nov 12 10:57:49.074: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Nov 12 10:57:49.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 create -f -'
Nov 12 10:57:49.293: INFO: rc: 1
Nov 12 10:57:49.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8832 apply -f -'
Nov 12 10:57:49.477: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Nov 12 10:57:49.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8089-crds'
Nov 12 10:57:49.710: INFO: stderr: ""
Nov 12 10:57:49.710: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8089-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Nov 12 10:57:49.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8089-crds.metadata'
Nov 12 10:57:49.927: INFO: stderr: ""
Nov 12 10:57:49.927: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8089-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. 
Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Nov 12 10:57:49.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8089-crds.spec'
Nov 12 10:57:50.141: INFO: stderr: ""
Nov 12 10:57:50.141: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8089-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Nov 12 10:57:50.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8089-crds.spec.bars'
Nov 12 10:57:50.371: INFO: stderr: ""
Nov 12 10:57:50.371: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8089-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Nov 12 10:57:50.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8089-crds.spec.bars2'
Nov 12 10:57:50.588: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:57:53.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8832" for this suite.

• [SLOW TEST:7.671 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":171,"skipped":2776,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:57:53.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:57:53.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Nov 12 10:57:54.087: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-12T10:57:54Z generation:1 name:name1 resourceVersion:22332 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6ec31e23-1aaa-4866-ac6e-0a5b49bc3d02] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Nov 12 10:58:04.090: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-12T10:58:04Z generation:1 name:name2 resourceVersion:22358 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:39fa53fe-2310-43ee-841d-2a69f597e6be] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Nov 12 10:58:14.094: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-12T10:57:54Z generation:2 name:name1 resourceVersion:22380 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6ec31e23-1aaa-4866-ac6e-0a5b49bc3d02] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Nov 12 10:58:24.098: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-12T10:58:04Z generation:2 name:name2 resourceVersion:22402 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:39fa53fe-2310-43ee-841d-2a69f597e6be] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Nov 12 10:58:34.103: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-12T10:57:54Z generation:2 name:name1 resourceVersion:22424 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6ec31e23-1aaa-4866-ac6e-0a5b49bc3d02] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Nov 12 10:58:44.107: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-12T10:58:04Z generation:2 name:name2 resourceVersion:22446 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:39fa53fe-2310-43ee-841d-2a69f597e6be] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:58:54.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4343" for this suite.

• [SLOW TEST:61.103 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":172,"skipped":2796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:58:54.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-9ae393a9-3536-41ec-8b9e-658b356cef72
STEP: Creating a pod to test consume configMaps
Nov 12 10:58:54.652: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d" in namespace "configmap-4303" to be "success or failure"
Nov 12 10:58:54.654: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098157ms
Nov 12 10:58:56.657: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004831747s
Nov 12 10:58:58.659: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007313775s
Nov 12 10:59:00.662: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009875798s
Nov 12 10:59:02.664: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012774803s
Nov 12 10:59:04.667: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015479303s
STEP: Saw pod success
Nov 12 10:59:04.667: INFO: Pod "pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d" satisfied condition "success or failure"
Nov 12 10:59:04.670: INFO: Trying to get logs from node node2 pod pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d container configmap-volume-test: 
STEP: delete the pod
Nov 12 10:59:04.688: INFO: Waiting for pod pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d to disappear
Nov 12 10:59:04.690: INFO: Pod pod-configmaps-dd2c1689-2200-446f-abcd-1cf4abf73d5d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:59:04.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4303" for this suite.

• [SLOW TEST:10.073 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2862,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:59:04.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Nov 12 10:59:04.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Nov 12 10:59:04.837: INFO: stderr: ""
Nov 12 10:59:04.837: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nk8s.cni.cncf.io/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:59:04.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5328" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":174,"skipped":2871,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:59:04.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-dc1e04e7-1068-4844-a328-d4b8843f1587
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-dc1e04e7-1068-4844-a328-d4b8843f1587
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:59:16.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1681" for this suite.

• [SLOW TEST:12.066 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2899,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:59:16.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 10:59:16.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1666'
Nov 12 10:59:17.066: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 12 10:59:17.066: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Nov 12 10:59:17.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1666'
Nov 12 10:59:17.179: INFO: stderr: ""
Nov 12 10:59:17.179: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:59:17.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1666" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":176,"skipped":2917,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:59:17.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 10:59:17.212: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 12 10:59:27.216: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Nov 12 10:59:29.219: INFO: Creating deployment "test-rollover-deployment"
Nov 12 10:59:29.224: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Nov 12 10:59:31.228: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Nov 12 10:59:31.232: INFO: Ensure that both replica sets have 1 created replica
Nov 12 10:59:31.237: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Nov 12 10:59:31.242: INFO: Updating deployment test-rollover-deployment
Nov 12 10:59:31.242: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Nov 12 10:59:33.246: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Nov 12 10:59:33.250: INFO: Make sure deployment "test-rollover-deployment" is complete
Nov 12 10:59:33.255: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:33.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775571, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:35.262: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:35.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775571, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:37.261: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:37.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775571, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:39.260: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:39.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775571, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:41.260: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:41.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775571, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:43.260: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:43.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775581, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:45.261: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:45.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775581, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:47.260: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:47.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775581, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:49.260: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:49.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775581, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:51.260: INFO: all replica sets need to contain the pod-template-hash label
Nov 12 10:59:51.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775581, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775569, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 10:59:53.260: INFO: 
Nov 12 10:59:53.260: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Nov 12 10:59:53.265: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-9451 /apis/apps/v1/namespaces/deployment-9451/deployments/test-rollover-deployment 002350ad-f3c0-4953-8b37-3ace923f9199 22764 2 2020-11-12 10:59:29 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00502a408  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-12 10:59:29 +0000 UTC,LastTransitionTime:2020-11-12 10:59:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-11-12 10:59:51 +0000 UTC,LastTransitionTime:2020-11-12 10:59:29 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Nov 12 10:59:53.268: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-9451 /apis/apps/v1/namespaces/deployment-9451/replicasets/test-rollover-deployment-574d6dfbff 807b5489-1ba5-4833-9e1a-eacf29a2bc54 22753 2 2020-11-12 10:59:31 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 002350ad-f3c0-4953-8b37-3ace923f9199 0xc00502abb7 0xc00502abb8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00502ac78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:59:53.268: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Nov 12 10:59:53.268: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-9451 /apis/apps/v1/namespaces/deployment-9451/replicasets/test-rollover-controller 06438f4d-c25b-4934-bf80-a2f22ee6f7fe 22762 2 2020-11-12 10:59:17 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 002350ad-f3c0-4953-8b37-3ace923f9199 0xc00502aa67 0xc00502aa68}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00502ab28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:59:53.269: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-9451 /apis/apps/v1/namespaces/deployment-9451/replicasets/test-rollover-deployment-f6c94f66c 5f5b1a8a-be46-491b-87e4-3e3ee7c216ec 22696 2 2020-11-12 10:59:29 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 002350ad-f3c0-4953-8b37-3ace923f9199 0xc00502ad00 0xc00502ad01}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00502adc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Nov 12 10:59:53.271: INFO: Pod "test-rollover-deployment-574d6dfbff-tw5pv" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-tw5pv test-rollover-deployment-574d6dfbff- deployment-9451 /api/v1/namespaces/deployment-9451/pods/test-rollover-deployment-574d6dfbff-tw5pv 50efb534-1f2a-46fb-9084-382f3771bb7f 22729 0 2020-11-12 10:59:31 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.4.95"
    ],
    "mac": "0a:58:0a:f4:04:5f",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 807b5489-1ba5-4833-9e1a-eacf29a2bc54 0xc00502b4f7 0xc00502b4f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tmm8k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tmm8k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tmm8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:59:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:59:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 10:59:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.16,PodIP:10.244.4.95,StartTime:2020-11-12 10:59:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 10:59:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://14e0b82a9f6de00a00b2dc79b5eade688b3ba374c3fb8c979373c83878f94962,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 10:59:53.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9451" for this suite.

• [SLOW TEST:36.087 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":177,"skipped":2922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 10:59:53.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-4328
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 12 10:59:53.293: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 12 11:00:29.341: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.78 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:00:29.341: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:00:30.460: INFO: Found all expected endpoints: [netserver-0]
Nov 12 11:00:30.462: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.65 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:00:30.462: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:00:31.578: INFO: Found all expected endpoints: [netserver-1]
Nov 12 11:00:31.580: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.84 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:00:31.580: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:00:32.684: INFO: Found all expected endpoints: [netserver-2]
Nov 12 11:00:32.685: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.96 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4328 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:00:32.686: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:00:33.795: INFO: Found all expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:00:33.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4328" for this suite.

• [SLOW TEST:40.525 seconds]
[sig-network] Networking
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2950,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:00:33.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Nov 12 11:00:33.820: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Nov 12 11:00:33.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Nov 12 11:00:34.089: INFO: stderr: ""
Nov 12 11:00:34.089: INFO: stdout: "service/agnhost-slave created\n"
Nov 12 11:00:34.096: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Nov 12 11:00:34.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Nov 12 11:00:34.315: INFO: stderr: ""
Nov 12 11:00:34.315: INFO: stdout: "service/agnhost-master created\n"
Nov 12 11:00:34.315: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Nov 12 11:00:34.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Nov 12 11:00:34.541: INFO: stderr: ""
Nov 12 11:00:34.541: INFO: stdout: "service/frontend created\n"
Nov 12 11:00:34.541: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Nov 12 11:00:34.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Nov 12 11:00:34.760: INFO: stderr: ""
Nov 12 11:00:34.760: INFO: stdout: "deployment.apps/frontend created\n"
Nov 12 11:00:34.760: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Nov 12 11:00:34.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Nov 12 11:00:34.989: INFO: stderr: ""
Nov 12 11:00:34.989: INFO: stdout: "deployment.apps/agnhost-master created\n"
Nov 12 11:00:34.991: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Nov 12 11:00:34.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Nov 12 11:00:35.217: INFO: stderr: ""
Nov 12 11:00:35.217: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Nov 12 11:00:35.218: INFO: Waiting for all frontend pods to be Running.
Nov 12 11:00:50.277: INFO: Waiting for frontend to serve content.
Nov 12 11:00:50.284: INFO: Trying to add a new entry to the guestbook.
Nov 12 11:00:50.296: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Nov 12 11:00:50.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Nov 12 11:00:50.433: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:00:50.433: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Nov 12 11:00:50.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Nov 12 11:00:50.574: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:00:50.574: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Nov 12 11:00:50.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Nov 12 11:00:50.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:00:50.697: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Nov 12 11:00:50.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Nov 12 11:00:50.816: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:00:50.816: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Nov 12 11:00:50.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Nov 12 11:00:50.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:00:50.948: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Nov 12 11:00:50.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Nov 12 11:00:51.080: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:00:51.080: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:00:51.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-282" for this suite.

• [SLOW TEST:17.288 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":179,"skipped":2957,"failed":0}
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:00:51.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Nov 12 11:01:03.629: INFO: Successfully updated pod "pod-update-3a1bdcfb-df55-477b-8895-de0e5155f93a"
STEP: verifying the updated pod is in kubernetes
Nov 12 11:01:03.633: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:01:03.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5381" for this suite.

• [SLOW TEST:12.546 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2957,"failed":0}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:01:03.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Nov 12 11:01:03.654: INFO: Waiting up to 5m0s for pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47" in namespace "downward-api-5094" to be "success or failure"
Nov 12 11:01:03.655: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 1.507936ms
Nov 12 11:01:05.658: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004279115s
Nov 12 11:01:07.660: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006550246s
Nov 12 11:01:09.663: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009265865s
Nov 12 11:01:11.666: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012058071s
Nov 12 11:01:13.669: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014662868s
STEP: Saw pod success
Nov 12 11:01:13.669: INFO: Pod "downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47" satisfied condition "success or failure"
Nov 12 11:01:13.671: INFO: Trying to get logs from node node4 pod downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47 container dapi-container: 
STEP: delete the pod
Nov 12 11:01:13.687: INFO: Waiting for pod downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47 to disappear
Nov 12 11:01:13.689: INFO: Pod downward-api-5d46e989-fb05-487f-8068-4c62c8b5fa47 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:01:13.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5094" for this suite.

• [SLOW TEST:10.057 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2962,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:01:13.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Nov 12 11:01:13.714: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6820" to be "success or failure"
Nov 12 11:01:13.716: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 1.551361ms
Nov 12 11:01:15.718: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004101572s
Nov 12 11:01:17.721: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006725759s
Nov 12 11:01:19.724: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009797047s
Nov 12 11:01:21.726: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012498497s
Nov 12 11:01:23.730: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015568844s
STEP: Saw pod success
Nov 12 11:01:23.730: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Nov 12 11:01:23.732: INFO: Trying to get logs from node node3 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Nov 12 11:01:23.839: INFO: Waiting for pod pod-host-path-test to disappear
Nov 12 11:01:23.841: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:01:23.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6820" for this suite.

• [SLOW TEST:10.153 seconds]
[sig-storage] HostPath
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2972,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:01:23.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Nov 12 11:01:23.866: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 12 11:01:23.875: INFO: Waiting for terminating namespaces to be deleted...
Nov 12 11:01:23.876: INFO: 
Logging pods the kubelet thinks are on node node1 before test
Nov 12 11:01:23.888: INFO: tiller-deploy-58f6ff6c77-zrmnw from kube-system started at 2020-11-12 09:47:10 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.888: INFO: 	Container tiller ready: true, restart count 1
Nov 12 11:01:23.888: INFO: registry-proxy-txrdh from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.888: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:01:23.888: INFO: kube-proxy-m6bqr from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.888: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 11:01:23.888: INFO: nginx-proxy-node1 from kube-system started at 2020-11-12 09:44:33 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.888: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 11:01:23.888: INFO: kube-flannel-z5kqm from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:01:23.888: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:01:23.888: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 12 11:01:23.888: INFO: kube-multus-ds-amd64-k4qcb from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.888: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:01:23.888: INFO: nodelocaldns-kpvsh from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.888: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 11:01:23.888: INFO: 
Logging pods the kubelet thinks are on node node2 before test
Nov 12 11:01:23.899: INFO: nginx-proxy-node2 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.899: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 11:01:23.899: INFO: kube-proxy-bbzk5 from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.899: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 11:01:23.899: INFO: kube-flannel-gsk24 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:01:23.899: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:01:23.899: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 11:01:23.899: INFO: kube-multus-ds-amd64-8cjwp from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.899: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:01:23.899: INFO: nodelocaldns-ss57m from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.899: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 11:01:23.899: INFO: registry-proxy-lsxh9 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.899: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:01:23.899: INFO: 
Logging pods the kubelet thinks are on node node3 before test
Nov 12 11:01:23.904: INFO: kube-proxy-4b76p from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.904: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 11:01:23.904: INFO: nginx-proxy-node3 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.904: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 12 11:01:23.904: INFO: kube-multus-ds-amd64-vwl4k from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.904: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:01:23.904: INFO: registry-proxy-njmcx from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.904: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:01:23.904: INFO: kube-flannel-r9726 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:01:23.904: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:01:23.904: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 12 11:01:23.904: INFO: registry-9pgcj from kube-system started at 2020-11-12 09:47:38 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.904: INFO: 	Container registry ready: true, restart count 1
Nov 12 11:01:23.904: INFO: nodelocaldns-jw5xn from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.904: INFO: 	Container node-cache ready: true, restart count 2
Nov 12 11:01:23.904: INFO: 
Logging pods the kubelet thinks are on node node4 before test
Nov 12 11:01:23.911: INFO: kube-proxy-qsp5l from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.911: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 11:01:23.911: INFO: registry-proxy-zvv86 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.911: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:01:23.911: INFO: nginx-proxy-node4 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.911: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 11:01:23.911: INFO: kube-flannel-jbkp2 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:01:23.911: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:01:23.911: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 11:01:23.911: INFO: kube-multus-ds-amd64-44jqf from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.911: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:01:23.911: INFO: coredns-58687784f9-c4bt6 from kube-system started at 2020-11-12 09:46:39 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.911: INFO: 	Container coredns ready: true, restart count 1
Nov 12 11:01:23.911: INFO: nodelocaldns-4cm4z from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:01:23.911: INFO: 	Container node-cache ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e2eb44f9-e17a-4408-a670-636d7428354c 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-e2eb44f9-e17a-4408-a670-636d7428354c off the node node3
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e2eb44f9-e17a-4408-a670-636d7428354c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:01:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8898" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:24.105 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":183,"skipped":2984,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:01:47.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 11:01:48.604: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 11:01:50.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:01:52.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:01:54.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:01:56.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:01:58.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775708, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 11:02:01.620: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:02:01.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:02:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2616" for this suite.
STEP: Destroying namespace "webhook-2616-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.280 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":184,"skipped":2992,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:02:02.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:02:02.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34" in namespace "downward-api-2661" to be "success or failure"
Nov 12 11:02:02.255: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031571ms
Nov 12 11:02:04.258: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004921081s
Nov 12 11:02:06.261: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007620926s
Nov 12 11:02:08.263: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010305257s
Nov 12 11:02:10.266: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012963797s
Nov 12 11:02:12.269: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015741334s
STEP: Saw pod success
Nov 12 11:02:12.269: INFO: Pod "downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34" satisfied condition "success or failure"
Nov 12 11:02:12.271: INFO: Trying to get logs from node node3 pod downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34 container client-container: 
STEP: delete the pod
Nov 12 11:02:12.282: INFO: Waiting for pod downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34 to disappear
Nov 12 11:02:12.284: INFO: Pod downwardapi-volume-4f992ad5-c2ae-4084-8047-099fa0ed9e34 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:02:12.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2661" for this suite.

• [SLOW TEST:10.055 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3011,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:02:12.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:02:12.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9349
I1112 11:02:12.326595      10 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9349, replica count: 1
I1112 11:02:13.377153      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:14.377394      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:15.377657      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:16.377902      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:17.378153      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:18.378431      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:19.381360      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:20.382917      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:21.383168      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:02:22.383418      10 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 12 11:02:22.488: INFO: Created: latency-svc-h6xlh
Nov 12 11:02:22.491: INFO: Got endpoints: latency-svc-h6xlh [8.011819ms]
Nov 12 11:02:22.495: INFO: Created: latency-svc-wc5sm
Nov 12 11:02:22.496: INFO: Created: latency-svc-7q5sn
Nov 12 11:02:22.496: INFO: Got endpoints: latency-svc-wc5sm [5.066814ms]
Nov 12 11:02:22.498: INFO: Created: latency-svc-p2btx
Nov 12 11:02:22.498: INFO: Got endpoints: latency-svc-7q5sn [6.835837ms]
Nov 12 11:02:22.499: INFO: Created: latency-svc-8zcd9
Nov 12 11:02:22.499: INFO: Got endpoints: latency-svc-p2btx [7.764224ms]
Nov 12 11:02:22.500: INFO: Created: latency-svc-4nzz4
Nov 12 11:02:22.500: INFO: Got endpoints: latency-svc-8zcd9 [8.771416ms]
Nov 12 11:02:22.502: INFO: Got endpoints: latency-svc-4nzz4 [10.46628ms]
Nov 12 11:02:22.502: INFO: Created: latency-svc-x9v9w
Nov 12 11:02:22.503: INFO: Created: latency-svc-6qvxf
Nov 12 11:02:22.503: INFO: Got endpoints: latency-svc-x9v9w [11.870212ms]
Nov 12 11:02:22.505: INFO: Got endpoints: latency-svc-6qvxf [13.202446ms]
Nov 12 11:02:22.505: INFO: Created: latency-svc-c6bdw
Nov 12 11:02:22.506: INFO: Created: latency-svc-tz5vb
Nov 12 11:02:22.506: INFO: Got endpoints: latency-svc-c6bdw [15.019734ms]
Nov 12 11:02:22.508: INFO: Got endpoints: latency-svc-tz5vb [15.999807ms]
Nov 12 11:02:22.508: INFO: Created: latency-svc-5dbn5
Nov 12 11:02:22.510: INFO: Created: latency-svc-wq6vg
Nov 12 11:02:22.510: INFO: Got endpoints: latency-svc-5dbn5 [19.028314ms]
Nov 12 11:02:22.511: INFO: Got endpoints: latency-svc-wq6vg [19.354648ms]
Nov 12 11:02:22.511: INFO: Created: latency-svc-499d9
Nov 12 11:02:22.512: INFO: Created: latency-svc-xklz5
Nov 12 11:02:22.512: INFO: Got endpoints: latency-svc-499d9 [20.583879ms]
Nov 12 11:02:22.513: INFO: Created: latency-svc-85jz7
Nov 12 11:02:22.513: INFO: Got endpoints: latency-svc-xklz5 [21.802703ms]
Nov 12 11:02:22.515: INFO: Created: latency-svc-gtlw8
Nov 12 11:02:22.515: INFO: Got endpoints: latency-svc-85jz7 [23.067168ms]
Nov 12 11:02:22.516: INFO: Created: latency-svc-ftn9t
Nov 12 11:02:22.516: INFO: Got endpoints: latency-svc-gtlw8 [23.981247ms]
Nov 12 11:02:22.517: INFO: Got endpoints: latency-svc-ftn9t [20.688494ms]
Nov 12 11:02:22.517: INFO: Created: latency-svc-w7m4g
Nov 12 11:02:22.518: INFO: Created: latency-svc-2r2sh
Nov 12 11:02:22.518: INFO: Got endpoints: latency-svc-w7m4g [20.298497ms]
Nov 12 11:02:22.520: INFO: Created: latency-svc-f74l6
Nov 12 11:02:22.520: INFO: Got endpoints: latency-svc-2r2sh [20.246383ms]
Nov 12 11:02:22.521: INFO: Created: latency-svc-hmpvc
Nov 12 11:02:22.521: INFO: Got endpoints: latency-svc-f74l6 [20.466613ms]
Nov 12 11:02:22.522: INFO: Created: latency-svc-lvxz4
Nov 12 11:02:22.522: INFO: Got endpoints: latency-svc-hmpvc [20.537573ms]
Nov 12 11:02:22.523: INFO: Created: latency-svc-46gwj
Nov 12 11:02:22.523: INFO: Got endpoints: latency-svc-lvxz4 [19.757882ms]
Nov 12 11:02:22.524: INFO: Created: latency-svc-r9wm8
Nov 12 11:02:22.525: INFO: Got endpoints: latency-svc-46gwj [20.143807ms]
Nov 12 11:02:22.526: INFO: Got endpoints: latency-svc-r9wm8 [19.627773ms]
Nov 12 11:02:22.526: INFO: Created: latency-svc-cf978
Nov 12 11:02:22.527: INFO: Created: latency-svc-s46zx
Nov 12 11:02:22.527: INFO: Got endpoints: latency-svc-cf978 [19.477748ms]
Nov 12 11:02:22.528: INFO: Got endpoints: latency-svc-s46zx [17.933219ms]
Nov 12 11:02:22.529: INFO: Created: latency-svc-z6bg2
Nov 12 11:02:22.530: INFO: Got endpoints: latency-svc-z6bg2 [18.930701ms]
Nov 12 11:02:22.530: INFO: Created: latency-svc-pqwsc
Nov 12 11:02:22.531: INFO: Got endpoints: latency-svc-pqwsc [19.285111ms]
Nov 12 11:02:22.532: INFO: Created: latency-svc-4qt4x
Nov 12 11:02:22.532: INFO: Created: latency-svc-mrfnl
Nov 12 11:02:22.533: INFO: Got endpoints: latency-svc-4qt4x [19.156971ms]
Nov 12 11:02:22.534: INFO: Got endpoints: latency-svc-mrfnl [19.752103ms]
Nov 12 11:02:22.534: INFO: Created: latency-svc-gvlpv
Nov 12 11:02:22.535: INFO: Created: latency-svc-p4482
Nov 12 11:02:22.536: INFO: Created: latency-svc-g6w92
Nov 12 11:02:22.538: INFO: Created: latency-svc-5mgsl
Nov 12 11:02:22.539: INFO: Created: latency-svc-7gvn6
Nov 12 11:02:22.539: INFO: Got endpoints: latency-svc-gvlpv [23.522289ms]
Nov 12 11:02:22.540: INFO: Created: latency-svc-rkqhw
Nov 12 11:02:22.542: INFO: Created: latency-svc-nffbx
Nov 12 11:02:22.543: INFO: Created: latency-svc-gdd86
Nov 12 11:02:22.544: INFO: Created: latency-svc-cd8nc
Nov 12 11:02:22.546: INFO: Created: latency-svc-fszk8
Nov 12 11:02:22.547: INFO: Created: latency-svc-zztmz
Nov 12 11:02:22.548: INFO: Created: latency-svc-cdxf4
Nov 12 11:02:22.549: INFO: Created: latency-svc-88nx8
Nov 12 11:02:22.551: INFO: Created: latency-svc-sdj6g
Nov 12 11:02:22.552: INFO: Created: latency-svc-fv9kf
Nov 12 11:02:22.553: INFO: Created: latency-svc-dpz95
Nov 12 11:02:22.590: INFO: Got endpoints: latency-svc-p4482 [72.574169ms]
Nov 12 11:02:22.593: INFO: Created: latency-svc-gnzs7
Nov 12 11:02:22.640: INFO: Got endpoints: latency-svc-g6w92 [121.322192ms]
Nov 12 11:02:22.643: INFO: Created: latency-svc-96v9s
Nov 12 11:02:22.690: INFO: Got endpoints: latency-svc-5mgsl [170.163793ms]
Nov 12 11:02:22.693: INFO: Created: latency-svc-n24bg
Nov 12 11:02:22.740: INFO: Got endpoints: latency-svc-7gvn6 [218.553624ms]
Nov 12 11:02:22.743: INFO: Created: latency-svc-v2rwc
Nov 12 11:02:22.790: INFO: Got endpoints: latency-svc-rkqhw [267.345442ms]
Nov 12 11:02:22.793: INFO: Created: latency-svc-csxkq
Nov 12 11:02:22.839: INFO: Got endpoints: latency-svc-nffbx [316.123344ms]
Nov 12 11:02:22.842: INFO: Created: latency-svc-44fwk
Nov 12 11:02:22.890: INFO: Got endpoints: latency-svc-gdd86 [364.416059ms]
Nov 12 11:02:22.893: INFO: Created: latency-svc-77jr4
Nov 12 11:02:22.939: INFO: Got endpoints: latency-svc-cd8nc [413.245099ms]
Nov 12 11:02:22.942: INFO: Created: latency-svc-nkqn9
Nov 12 11:02:22.991: INFO: Got endpoints: latency-svc-fszk8 [463.335447ms]
Nov 12 11:02:22.994: INFO: Created: latency-svc-lqndj
Nov 12 11:02:23.040: INFO: Got endpoints: latency-svc-zztmz [511.301814ms]
Nov 12 11:02:23.043: INFO: Created: latency-svc-2gmdg
Nov 12 11:02:23.090: INFO: Got endpoints: latency-svc-cdxf4 [559.910932ms]
Nov 12 11:02:23.093: INFO: Created: latency-svc-xld28
Nov 12 11:02:23.139: INFO: Got endpoints: latency-svc-88nx8 [608.052353ms]
Nov 12 11:02:23.142: INFO: Created: latency-svc-slffs
Nov 12 11:02:23.190: INFO: Got endpoints: latency-svc-sdj6g [657.036613ms]
Nov 12 11:02:23.193: INFO: Created: latency-svc-9lrnd
Nov 12 11:02:23.240: INFO: Got endpoints: latency-svc-fv9kf [705.24321ms]
Nov 12 11:02:23.243: INFO: Created: latency-svc-sf2wz
Nov 12 11:02:23.290: INFO: Got endpoints: latency-svc-dpz95 [750.342423ms]
Nov 12 11:02:23.293: INFO: Created: latency-svc-nqrx8
Nov 12 11:02:23.340: INFO: Got endpoints: latency-svc-gnzs7 [749.784871ms]
Nov 12 11:02:23.343: INFO: Created: latency-svc-ndhqs
Nov 12 11:02:23.391: INFO: Got endpoints: latency-svc-96v9s [750.777323ms]
Nov 12 11:02:23.394: INFO: Created: latency-svc-zhdfd
Nov 12 11:02:23.440: INFO: Got endpoints: latency-svc-n24bg [749.768856ms]
Nov 12 11:02:23.443: INFO: Created: latency-svc-g2xnw
Nov 12 11:02:23.490: INFO: Got endpoints: latency-svc-v2rwc [750.468348ms]
Nov 12 11:02:23.493: INFO: Created: latency-svc-74qlb
Nov 12 11:02:23.540: INFO: Got endpoints: latency-svc-csxkq [749.997555ms]
Nov 12 11:02:23.543: INFO: Created: latency-svc-78btt
Nov 12 11:02:23.590: INFO: Got endpoints: latency-svc-44fwk [750.55792ms]
Nov 12 11:02:23.593: INFO: Created: latency-svc-9bwhm
Nov 12 11:02:23.640: INFO: Got endpoints: latency-svc-77jr4 [749.855768ms]
Nov 12 11:02:23.643: INFO: Created: latency-svc-lmc6c
Nov 12 11:02:23.690: INFO: Got endpoints: latency-svc-nkqn9 [750.262265ms]
Nov 12 11:02:23.693: INFO: Created: latency-svc-8qzlv
Nov 12 11:02:23.740: INFO: Got endpoints: latency-svc-lqndj [749.433261ms]
Nov 12 11:02:23.743: INFO: Created: latency-svc-ft96h
Nov 12 11:02:23.790: INFO: Got endpoints: latency-svc-2gmdg [750.098058ms]
Nov 12 11:02:23.793: INFO: Created: latency-svc-72slh
Nov 12 11:02:23.840: INFO: Got endpoints: latency-svc-xld28 [749.739708ms]
Nov 12 11:02:23.843: INFO: Created: latency-svc-d82q4
Nov 12 11:02:23.890: INFO: Got endpoints: latency-svc-slffs [750.332141ms]
Nov 12 11:02:23.893: INFO: Created: latency-svc-sck7v
Nov 12 11:02:23.940: INFO: Got endpoints: latency-svc-9lrnd [750.0364ms]
Nov 12 11:02:23.943: INFO: Created: latency-svc-2bb9w
Nov 12 11:02:23.990: INFO: Got endpoints: latency-svc-sf2wz [750.159604ms]
Nov 12 11:02:23.993: INFO: Created: latency-svc-wdm9s
Nov 12 11:02:24.040: INFO: Got endpoints: latency-svc-nqrx8 [750.051624ms]
Nov 12 11:02:24.043: INFO: Created: latency-svc-t5q7h
Nov 12 11:02:24.090: INFO: Got endpoints: latency-svc-ndhqs [750.228437ms]
Nov 12 11:02:24.094: INFO: Created: latency-svc-jkl95
Nov 12 11:02:24.140: INFO: Got endpoints: latency-svc-zhdfd [749.099148ms]
Nov 12 11:02:24.143: INFO: Created: latency-svc-vjwqz
Nov 12 11:02:24.191: INFO: Got endpoints: latency-svc-g2xnw [751.000528ms]
Nov 12 11:02:24.194: INFO: Created: latency-svc-7lpfq
Nov 12 11:02:24.240: INFO: Got endpoints: latency-svc-74qlb [749.565044ms]
Nov 12 11:02:24.243: INFO: Created: latency-svc-8d752
Nov 12 11:02:24.290: INFO: Got endpoints: latency-svc-78btt [749.909513ms]
Nov 12 11:02:24.294: INFO: Created: latency-svc-wj4m9
Nov 12 11:02:24.340: INFO: Got endpoints: latency-svc-9bwhm [750.021912ms]
Nov 12 11:02:24.343: INFO: Created: latency-svc-rhfhw
Nov 12 11:02:24.390: INFO: Got endpoints: latency-svc-lmc6c [750.141617ms]
Nov 12 11:02:24.393: INFO: Created: latency-svc-k82hm
Nov 12 11:02:24.440: INFO: Got endpoints: latency-svc-8qzlv [749.921806ms]
Nov 12 11:02:24.443: INFO: Created: latency-svc-9pbg9
Nov 12 11:02:24.490: INFO: Got endpoints: latency-svc-ft96h [749.685574ms]
Nov 12 11:02:24.493: INFO: Created: latency-svc-n8ncd
Nov 12 11:02:24.540: INFO: Got endpoints: latency-svc-72slh [750.034623ms]
Nov 12 11:02:24.543: INFO: Created: latency-svc-p2f4p
Nov 12 11:02:24.590: INFO: Got endpoints: latency-svc-d82q4 [749.905612ms]
Nov 12 11:02:24.593: INFO: Created: latency-svc-6mwdz
Nov 12 11:02:24.640: INFO: Got endpoints: latency-svc-sck7v [749.916788ms]
Nov 12 11:02:24.643: INFO: Created: latency-svc-f9p46
Nov 12 11:02:24.690: INFO: Got endpoints: latency-svc-2bb9w [749.668029ms]
Nov 12 11:02:24.693: INFO: Created: latency-svc-jvssk
Nov 12 11:02:24.740: INFO: Got endpoints: latency-svc-wdm9s [749.770078ms]
Nov 12 11:02:24.743: INFO: Created: latency-svc-brmss
Nov 12 11:02:24.791: INFO: Got endpoints: latency-svc-t5q7h [750.973933ms]
Nov 12 11:02:24.794: INFO: Created: latency-svc-xwsls
Nov 12 11:02:24.840: INFO: Got endpoints: latency-svc-jkl95 [749.791326ms]
Nov 12 11:02:24.843: INFO: Created: latency-svc-m2rlm
Nov 12 11:02:24.890: INFO: Got endpoints: latency-svc-vjwqz [749.980457ms]
Nov 12 11:02:24.893: INFO: Created: latency-svc-v8lf4
Nov 12 11:02:24.939: INFO: Got endpoints: latency-svc-7lpfq [748.741348ms]
Nov 12 11:02:24.942: INFO: Created: latency-svc-ltwh6
Nov 12 11:02:24.990: INFO: Got endpoints: latency-svc-8d752 [750.506716ms]
Nov 12 11:02:24.993: INFO: Created: latency-svc-smrvl
Nov 12 11:02:25.040: INFO: Got endpoints: latency-svc-wj4m9 [749.697417ms]
Nov 12 11:02:25.043: INFO: Created: latency-svc-m9492
Nov 12 11:02:25.091: INFO: Got endpoints: latency-svc-rhfhw [750.805651ms]
Nov 12 11:02:25.094: INFO: Created: latency-svc-4lqjz
Nov 12 11:02:25.140: INFO: Got endpoints: latency-svc-k82hm [749.875074ms]
Nov 12 11:02:25.142: INFO: Created: latency-svc-tdfxc
Nov 12 11:02:25.190: INFO: Got endpoints: latency-svc-9pbg9 [749.954806ms]
Nov 12 11:02:25.193: INFO: Created: latency-svc-vsmbr
Nov 12 11:02:25.240: INFO: Got endpoints: latency-svc-n8ncd [749.905007ms]
Nov 12 11:02:25.243: INFO: Created: latency-svc-7p766
Nov 12 11:02:25.290: INFO: Got endpoints: latency-svc-p2f4p [749.748598ms]
Nov 12 11:02:25.293: INFO: Created: latency-svc-4v787
Nov 12 11:02:25.340: INFO: Got endpoints: latency-svc-6mwdz [750.036615ms]
Nov 12 11:02:25.343: INFO: Created: latency-svc-xrzj5
Nov 12 11:02:25.390: INFO: Got endpoints: latency-svc-f9p46 [750.398178ms]
Nov 12 11:02:25.393: INFO: Created: latency-svc-4q8pp
Nov 12 11:02:25.439: INFO: Got endpoints: latency-svc-jvssk [749.937822ms]
Nov 12 11:02:25.442: INFO: Created: latency-svc-7sckd
Nov 12 11:02:25.491: INFO: Got endpoints: latency-svc-brmss [751.146093ms]
Nov 12 11:02:25.494: INFO: Created: latency-svc-vcb7s
Nov 12 11:02:25.540: INFO: Got endpoints: latency-svc-xwsls [749.00122ms]
Nov 12 11:02:25.543: INFO: Created: latency-svc-hv7qg
Nov 12 11:02:25.590: INFO: Got endpoints: latency-svc-m2rlm [750.133837ms]
Nov 12 11:02:25.593: INFO: Created: latency-svc-4grr4
Nov 12 11:02:25.640: INFO: Got endpoints: latency-svc-v8lf4 [749.98151ms]
Nov 12 11:02:25.643: INFO: Created: latency-svc-88rbk
Nov 12 11:02:25.689: INFO: Got endpoints: latency-svc-ltwh6 [749.880114ms]
Nov 12 11:02:25.692: INFO: Created: latency-svc-m4dqj
Nov 12 11:02:25.740: INFO: Got endpoints: latency-svc-smrvl [749.527996ms]
Nov 12 11:02:25.743: INFO: Created: latency-svc-b9sb8
Nov 12 11:02:25.790: INFO: Got endpoints: latency-svc-m9492 [750.188425ms]
Nov 12 11:02:25.793: INFO: Created: latency-svc-2rdls
Nov 12 11:02:25.840: INFO: Got endpoints: latency-svc-4lqjz [748.561903ms]
Nov 12 11:02:25.842: INFO: Created: latency-svc-7v4qp
Nov 12 11:02:25.889: INFO: Got endpoints: latency-svc-tdfxc [749.759791ms]
Nov 12 11:02:25.893: INFO: Created: latency-svc-n6x7j
Nov 12 11:02:25.939: INFO: Got endpoints: latency-svc-vsmbr [749.656998ms]
Nov 12 11:02:25.943: INFO: Created: latency-svc-dhssn
Nov 12 11:02:25.991: INFO: Got endpoints: latency-svc-7p766 [751.645404ms]
Nov 12 11:02:25.994: INFO: Created: latency-svc-52266
Nov 12 11:02:26.039: INFO: Got endpoints: latency-svc-4v787 [749.694038ms]
Nov 12 11:02:26.042: INFO: Created: latency-svc-5zklx
Nov 12 11:02:26.090: INFO: Got endpoints: latency-svc-xrzj5 [749.930961ms]
Nov 12 11:02:26.094: INFO: Created: latency-svc-gckvk
Nov 12 11:02:26.140: INFO: Got endpoints: latency-svc-4q8pp [749.429913ms]
Nov 12 11:02:26.143: INFO: Created: latency-svc-wmzqh
Nov 12 11:02:26.190: INFO: Got endpoints: latency-svc-7sckd [750.209707ms]
Nov 12 11:02:26.194: INFO: Created: latency-svc-wm8r6
Nov 12 11:02:26.240: INFO: Got endpoints: latency-svc-vcb7s [748.710985ms]
Nov 12 11:02:26.243: INFO: Created: latency-svc-zvlmf
Nov 12 11:02:26.290: INFO: Got endpoints: latency-svc-hv7qg [749.782835ms]
Nov 12 11:02:26.293: INFO: Created: latency-svc-dwv6c
Nov 12 11:02:26.339: INFO: Got endpoints: latency-svc-4grr4 [749.557254ms]
Nov 12 11:02:26.344: INFO: Created: latency-svc-spl82
Nov 12 11:02:26.390: INFO: Got endpoints: latency-svc-88rbk [750.06503ms]
Nov 12 11:02:26.393: INFO: Created: latency-svc-767jh
Nov 12 11:02:26.440: INFO: Got endpoints: latency-svc-m4dqj [750.149332ms]
Nov 12 11:02:26.443: INFO: Created: latency-svc-xw92q
Nov 12 11:02:26.490: INFO: Got endpoints: latency-svc-b9sb8 [750.100438ms]
Nov 12 11:02:26.494: INFO: Created: latency-svc-jvxvf
Nov 12 11:02:26.540: INFO: Got endpoints: latency-svc-2rdls [749.739397ms]
Nov 12 11:02:26.543: INFO: Created: latency-svc-qx59t
Nov 12 11:02:26.590: INFO: Got endpoints: latency-svc-7v4qp [750.05744ms]
Nov 12 11:02:26.593: INFO: Created: latency-svc-qw4x4
Nov 12 11:02:26.639: INFO: Got endpoints: latency-svc-n6x7j [749.935422ms]
Nov 12 11:02:26.642: INFO: Created: latency-svc-xcvk6
Nov 12 11:02:26.690: INFO: Got endpoints: latency-svc-dhssn [750.286984ms]
Nov 12 11:02:26.693: INFO: Created: latency-svc-m977z
Nov 12 11:02:26.740: INFO: Got endpoints: latency-svc-52266 [748.157832ms]
Nov 12 11:02:26.743: INFO: Created: latency-svc-wmhgv
Nov 12 11:02:26.790: INFO: Got endpoints: latency-svc-5zklx [750.450752ms]
Nov 12 11:02:26.794: INFO: Created: latency-svc-mq94c
Nov 12 11:02:26.840: INFO: Got endpoints: latency-svc-gckvk [749.959176ms]
Nov 12 11:02:26.844: INFO: Created: latency-svc-fnvsk
Nov 12 11:02:26.890: INFO: Got endpoints: latency-svc-wmzqh [750.067594ms]
Nov 12 11:02:26.893: INFO: Created: latency-svc-7hrf5
Nov 12 11:02:26.940: INFO: Got endpoints: latency-svc-wm8r6 [749.781615ms]
Nov 12 11:02:26.942: INFO: Created: latency-svc-cx6qx
Nov 12 11:02:26.990: INFO: Got endpoints: latency-svc-zvlmf [750.170594ms]
Nov 12 11:02:26.993: INFO: Created: latency-svc-wwgrd
Nov 12 11:02:27.040: INFO: Got endpoints: latency-svc-dwv6c [749.786848ms]
Nov 12 11:02:27.043: INFO: Created: latency-svc-9vds2
Nov 12 11:02:27.090: INFO: Got endpoints: latency-svc-spl82 [750.423159ms]
Nov 12 11:02:27.094: INFO: Created: latency-svc-lzhd6
Nov 12 11:02:27.140: INFO: Got endpoints: latency-svc-767jh [749.6983ms]
Nov 12 11:02:27.143: INFO: Created: latency-svc-vthnj
Nov 12 11:02:27.189: INFO: Got endpoints: latency-svc-xw92q [749.762124ms]
Nov 12 11:02:27.192: INFO: Created: latency-svc-nzvsw
Nov 12 11:02:27.240: INFO: Got endpoints: latency-svc-jvxvf [749.580978ms]
Nov 12 11:02:27.242: INFO: Created: latency-svc-gtjkb
Nov 12 11:02:27.290: INFO: Got endpoints: latency-svc-qx59t [750.437052ms]
Nov 12 11:02:27.293: INFO: Created: latency-svc-2cz7w
Nov 12 11:02:27.340: INFO: Got endpoints: latency-svc-qw4x4 [750.130086ms]
Nov 12 11:02:27.343: INFO: Created: latency-svc-lkd6l
Nov 12 11:02:27.390: INFO: Got endpoints: latency-svc-xcvk6 [750.178303ms]
Nov 12 11:02:27.393: INFO: Created: latency-svc-bg59x
Nov 12 11:02:27.439: INFO: Got endpoints: latency-svc-m977z [749.715055ms]
Nov 12 11:02:27.443: INFO: Created: latency-svc-55gbv
Nov 12 11:02:27.490: INFO: Got endpoints: latency-svc-wmhgv [749.947894ms]
Nov 12 11:02:27.493: INFO: Created: latency-svc-qd6wj
Nov 12 11:02:27.539: INFO: Got endpoints: latency-svc-mq94c [749.555058ms]
Nov 12 11:02:27.542: INFO: Created: latency-svc-9mfxn
Nov 12 11:02:27.590: INFO: Got endpoints: latency-svc-fnvsk [749.999765ms]
Nov 12 11:02:27.593: INFO: Created: latency-svc-l28r5
Nov 12 11:02:27.640: INFO: Got endpoints: latency-svc-7hrf5 [749.7118ms]
Nov 12 11:02:27.643: INFO: Created: latency-svc-ms8t2
Nov 12 11:02:27.690: INFO: Got endpoints: latency-svc-cx6qx [749.988626ms]
Nov 12 11:02:27.693: INFO: Created: latency-svc-2489j
Nov 12 11:02:27.739: INFO: Got endpoints: latency-svc-wwgrd [749.698705ms]
Nov 12 11:02:27.742: INFO: Created: latency-svc-5tcvx
Nov 12 11:02:27.789: INFO: Got endpoints: latency-svc-9vds2 [749.898434ms]
Nov 12 11:02:27.792: INFO: Created: latency-svc-sxwhk
Nov 12 11:02:27.840: INFO: Got endpoints: latency-svc-lzhd6 [749.643547ms]
Nov 12 11:02:27.843: INFO: Created: latency-svc-hnj7g
Nov 12 11:02:27.889: INFO: Got endpoints: latency-svc-vthnj [749.831328ms]
Nov 12 11:02:27.893: INFO: Created: latency-svc-k77rp
Nov 12 11:02:27.939: INFO: Got endpoints: latency-svc-nzvsw [749.995334ms]
Nov 12 11:02:27.943: INFO: Created: latency-svc-hnmft
Nov 12 11:02:27.990: INFO: Got endpoints: latency-svc-gtjkb [750.050945ms]
Nov 12 11:02:27.993: INFO: Created: latency-svc-9t4wj
Nov 12 11:02:28.040: INFO: Got endpoints: latency-svc-2cz7w [749.549808ms]
Nov 12 11:02:28.043: INFO: Created: latency-svc-5zz64
Nov 12 11:02:28.089: INFO: Got endpoints: latency-svc-lkd6l [749.543673ms]
Nov 12 11:02:28.092: INFO: Created: latency-svc-lmhmd
Nov 12 11:02:28.140: INFO: Got endpoints: latency-svc-bg59x [749.808131ms]
Nov 12 11:02:28.143: INFO: Created: latency-svc-sjsx5
Nov 12 11:02:28.190: INFO: Got endpoints: latency-svc-55gbv [750.128383ms]
Nov 12 11:02:28.193: INFO: Created: latency-svc-llh7c
Nov 12 11:02:28.240: INFO: Got endpoints: latency-svc-qd6wj [750.19884ms]
Nov 12 11:02:28.243: INFO: Created: latency-svc-wzxc4
Nov 12 11:02:28.290: INFO: Got endpoints: latency-svc-9mfxn [750.078825ms]
Nov 12 11:02:28.293: INFO: Created: latency-svc-ntqzv
Nov 12 11:02:28.340: INFO: Got endpoints: latency-svc-l28r5 [749.921011ms]
Nov 12 11:02:28.343: INFO: Created: latency-svc-nbvbj
Nov 12 11:02:28.390: INFO: Got endpoints: latency-svc-ms8t2 [750.209805ms]
Nov 12 11:02:28.393: INFO: Created: latency-svc-nvrzz
Nov 12 11:02:28.439: INFO: Got endpoints: latency-svc-2489j [749.736825ms]
Nov 12 11:02:28.442: INFO: Created: latency-svc-bgmzr
Nov 12 11:02:28.490: INFO: Got endpoints: latency-svc-5tcvx [750.054993ms]
Nov 12 11:02:28.493: INFO: Created: latency-svc-rm69s
Nov 12 11:02:28.540: INFO: Got endpoints: latency-svc-sxwhk [750.110877ms]
Nov 12 11:02:28.542: INFO: Created: latency-svc-6bghk
Nov 12 11:02:28.590: INFO: Got endpoints: latency-svc-hnj7g [750.068753ms]
Nov 12 11:02:28.593: INFO: Created: latency-svc-drc4m
Nov 12 11:02:28.639: INFO: Got endpoints: latency-svc-k77rp [749.964619ms]
Nov 12 11:02:28.643: INFO: Created: latency-svc-4vvfc
Nov 12 11:02:28.690: INFO: Got endpoints: latency-svc-hnmft [750.284336ms]
Nov 12 11:02:28.693: INFO: Created: latency-svc-kzmcl
Nov 12 11:02:28.740: INFO: Got endpoints: latency-svc-9t4wj [750.138704ms]
Nov 12 11:02:28.743: INFO: Created: latency-svc-sccth
Nov 12 11:02:28.790: INFO: Got endpoints: latency-svc-5zz64 [749.96718ms]
Nov 12 11:02:28.793: INFO: Created: latency-svc-br5hg
Nov 12 11:02:28.840: INFO: Got endpoints: latency-svc-lmhmd [750.108124ms]
Nov 12 11:02:28.843: INFO: Created: latency-svc-mtp69
Nov 12 11:02:28.889: INFO: Got endpoints: latency-svc-sjsx5 [749.92426ms]
Nov 12 11:02:28.892: INFO: Created: latency-svc-ghtbn
Nov 12 11:02:28.940: INFO: Got endpoints: latency-svc-llh7c [749.823929ms]
Nov 12 11:02:28.943: INFO: Created: latency-svc-jtnzn
Nov 12 11:02:28.991: INFO: Got endpoints: latency-svc-wzxc4 [750.812661ms]
Nov 12 11:02:28.994: INFO: Created: latency-svc-fdvnb
Nov 12 11:02:29.040: INFO: Got endpoints: latency-svc-ntqzv [750.156969ms]
Nov 12 11:02:29.043: INFO: Created: latency-svc-qm48z
Nov 12 11:02:29.090: INFO: Got endpoints: latency-svc-nbvbj [750.149015ms]
Nov 12 11:02:29.093: INFO: Created: latency-svc-bgdct
Nov 12 11:02:29.140: INFO: Got endpoints: latency-svc-nvrzz [749.842831ms]
Nov 12 11:02:29.143: INFO: Created: latency-svc-wwqrx
Nov 12 11:02:29.190: INFO: Got endpoints: latency-svc-bgmzr [750.226325ms]
Nov 12 11:02:29.193: INFO: Created: latency-svc-j9szv
Nov 12 11:02:29.240: INFO: Got endpoints: latency-svc-rm69s [750.08215ms]
Nov 12 11:02:29.243: INFO: Created: latency-svc-sjfnx
Nov 12 11:02:29.290: INFO: Got endpoints: latency-svc-6bghk [750.70667ms]
Nov 12 11:02:29.294: INFO: Created: latency-svc-ppssr
Nov 12 11:02:29.339: INFO: Got endpoints: latency-svc-drc4m [749.663538ms]
Nov 12 11:02:29.342: INFO: Created: latency-svc-q7lrr
Nov 12 11:02:29.390: INFO: Got endpoints: latency-svc-4vvfc [750.522057ms]
Nov 12 11:02:29.393: INFO: Created: latency-svc-26bgc
Nov 12 11:02:29.440: INFO: Got endpoints: latency-svc-kzmcl [749.77107ms]
Nov 12 11:02:29.443: INFO: Created: latency-svc-nwkg7
Nov 12 11:02:29.490: INFO: Got endpoints: latency-svc-sccth [749.846043ms]
Nov 12 11:02:29.493: INFO: Created: latency-svc-nqphm
Nov 12 11:02:29.540: INFO: Got endpoints: latency-svc-br5hg [749.934536ms]
Nov 12 11:02:29.543: INFO: Created: latency-svc-hbnsn
Nov 12 11:02:29.590: INFO: Got endpoints: latency-svc-mtp69 [750.003702ms]
Nov 12 11:02:29.593: INFO: Created: latency-svc-s48s2
Nov 12 11:02:29.640: INFO: Got endpoints: latency-svc-ghtbn [750.266669ms]
Nov 12 11:02:29.643: INFO: Created: latency-svc-5qvnc
Nov 12 11:02:29.690: INFO: Got endpoints: latency-svc-jtnzn [750.216137ms]
Nov 12 11:02:29.693: INFO: Created: latency-svc-zwrlx
Nov 12 11:02:29.740: INFO: Got endpoints: latency-svc-fdvnb [748.937349ms]
Nov 12 11:02:29.743: INFO: Created: latency-svc-8nn84
Nov 12 11:02:29.789: INFO: Got endpoints: latency-svc-qm48z [749.628981ms]
Nov 12 11:02:29.792: INFO: Created: latency-svc-dg7hz
Nov 12 11:02:29.840: INFO: Got endpoints: latency-svc-bgdct [749.836719ms]
Nov 12 11:02:29.843: INFO: Created: latency-svc-vdtgm
Nov 12 11:02:29.890: INFO: Got endpoints: latency-svc-wwqrx [750.011812ms]
Nov 12 11:02:29.893: INFO: Created: latency-svc-fxw7k
Nov 12 11:02:29.939: INFO: Got endpoints: latency-svc-j9szv [749.755104ms]
Nov 12 11:02:29.942: INFO: Created: latency-svc-5l4bf
Nov 12 11:02:29.990: INFO: Got endpoints: latency-svc-sjfnx [750.28418ms]
Nov 12 11:02:29.993: INFO: Created: latency-svc-vc5cf
Nov 12 11:02:30.039: INFO: Got endpoints: latency-svc-ppssr [749.072164ms]
Nov 12 11:02:30.043: INFO: Created: latency-svc-m2wdx
Nov 12 11:02:30.090: INFO: Got endpoints: latency-svc-q7lrr [750.323196ms]
Nov 12 11:02:30.093: INFO: Created: latency-svc-7vwtz
Nov 12 11:02:30.140: INFO: Got endpoints: latency-svc-26bgc [749.711012ms]
Nov 12 11:02:30.143: INFO: Created: latency-svc-c6k4n
Nov 12 11:02:30.190: INFO: Got endpoints: latency-svc-nwkg7 [750.311196ms]
Nov 12 11:02:30.193: INFO: Created: latency-svc-xfz6c
Nov 12 11:02:30.240: INFO: Got endpoints: latency-svc-nqphm [749.825145ms]
Nov 12 11:02:30.243: INFO: Created: latency-svc-bj2xd
Nov 12 11:02:30.290: INFO: Got endpoints: latency-svc-hbnsn [750.392657ms]
Nov 12 11:02:30.294: INFO: Created: latency-svc-6k7cw
Nov 12 11:02:30.340: INFO: Got endpoints: latency-svc-s48s2 [750.033352ms]
Nov 12 11:02:30.390: INFO: Got endpoints: latency-svc-5qvnc [749.983466ms]
Nov 12 11:02:30.440: INFO: Got endpoints: latency-svc-zwrlx [749.964691ms]
Nov 12 11:02:30.490: INFO: Got endpoints: latency-svc-8nn84 [750.517044ms]
Nov 12 11:02:30.540: INFO: Got endpoints: latency-svc-dg7hz [750.363746ms]
Nov 12 11:02:30.592: INFO: Got endpoints: latency-svc-vdtgm [752.149441ms]
Nov 12 11:02:30.640: INFO: Got endpoints: latency-svc-fxw7k [749.815627ms]
Nov 12 11:02:30.690: INFO: Got endpoints: latency-svc-5l4bf [750.383849ms]
Nov 12 11:02:30.740: INFO: Got endpoints: latency-svc-vc5cf [749.569688ms]
Nov 12 11:02:30.790: INFO: Got endpoints: latency-svc-m2wdx [750.277002ms]
Nov 12 11:02:30.840: INFO: Got endpoints: latency-svc-7vwtz [749.951706ms]
Nov 12 11:02:30.890: INFO: Got endpoints: latency-svc-c6k4n [749.734844ms]
Nov 12 11:02:30.939: INFO: Got endpoints: latency-svc-xfz6c [749.414418ms]
Nov 12 11:02:30.990: INFO: Got endpoints: latency-svc-bj2xd [750.094398ms]
Nov 12 11:02:31.040: INFO: Got endpoints: latency-svc-6k7cw [749.618111ms]
Nov 12 11:02:31.040: INFO: Latencies: [5.066814ms 6.835837ms 7.764224ms 8.771416ms 10.46628ms 11.870212ms 13.202446ms 15.019734ms 15.999807ms 17.933219ms 18.930701ms 19.028314ms 19.156971ms 19.285111ms 19.354648ms 19.477748ms 19.627773ms 19.752103ms 19.757882ms 20.143807ms 20.246383ms 20.298497ms 20.466613ms 20.537573ms 20.583879ms 20.688494ms 21.802703ms 23.067168ms 23.522289ms 23.981247ms 72.574169ms 121.322192ms 170.163793ms 218.553624ms 267.345442ms 316.123344ms 364.416059ms 413.245099ms 463.335447ms 511.301814ms 559.910932ms 608.052353ms 657.036613ms 705.24321ms 748.157832ms 748.561903ms 748.710985ms 748.741348ms 748.937349ms 749.00122ms 749.072164ms 749.099148ms 749.414418ms 749.429913ms 749.433261ms 749.527996ms 749.543673ms 749.549808ms 749.555058ms 749.557254ms 749.565044ms 749.569688ms 749.580978ms 749.618111ms 749.628981ms 749.643547ms 749.656998ms 749.663538ms 749.668029ms 749.685574ms 749.694038ms 749.697417ms 749.6983ms 749.698705ms 749.711012ms 749.7118ms 749.715055ms 749.734844ms 749.736825ms 749.739397ms 749.739708ms 749.748598ms 749.755104ms 749.759791ms 749.762124ms 749.768856ms 749.770078ms 749.77107ms 749.781615ms 749.782835ms 749.784871ms 749.786848ms 749.791326ms 749.808131ms 749.815627ms 749.823929ms 749.825145ms 749.831328ms 749.836719ms 749.842831ms 749.846043ms 749.855768ms 749.875074ms 749.880114ms 749.898434ms 749.905007ms 749.905612ms 749.909513ms 749.916788ms 749.921011ms 749.921806ms 749.92426ms 749.930961ms 749.934536ms 749.935422ms 749.937822ms 749.947894ms 749.951706ms 749.954806ms 749.959176ms 749.964619ms 749.964691ms 749.96718ms 749.980457ms 749.98151ms 749.983466ms 749.988626ms 749.995334ms 749.997555ms 749.999765ms 750.003702ms 750.011812ms 750.021912ms 750.033352ms 750.034623ms 750.0364ms 750.036615ms 750.050945ms 750.051624ms 750.054993ms 750.05744ms 750.06503ms 750.067594ms 750.068753ms 750.078825ms 750.08215ms 750.094398ms 750.098058ms 750.100438ms 750.108124ms 750.110877ms 750.128383ms 750.130086ms 750.133837ms 750.138704ms 750.141617ms 750.149015ms 750.149332ms 750.156969ms 750.159604ms 750.170594ms 750.178303ms 750.188425ms 750.19884ms 750.209707ms 750.209805ms 750.216137ms 750.226325ms 750.228437ms 750.262265ms 750.266669ms 750.277002ms 750.28418ms 750.284336ms 750.286984ms 750.311196ms 750.323196ms 750.332141ms 750.342423ms 750.363746ms 750.383849ms 750.392657ms 750.398178ms 750.423159ms 750.437052ms 750.450752ms 750.468348ms 750.506716ms 750.517044ms 750.522057ms 750.55792ms 750.70667ms 750.777323ms 750.805651ms 750.812661ms 750.973933ms 751.000528ms 751.146093ms 751.645404ms 752.149441ms]
Nov 12 11:02:31.040: INFO: 50 %ile: 749.846043ms
Nov 12 11:02:31.040: INFO: 90 %ile: 750.383849ms
Nov 12 11:02:31.040: INFO: 99 %ile: 751.645404ms
Nov 12 11:02:31.040: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:02:31.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9349" for this suite.

• [SLOW TEST:18.754 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":186,"skipped":3097,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:02:31.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:02:41.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4019" for this suite.

• [SLOW TEST:10.035 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3100,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:02:41.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:02:41.103: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df" in namespace "security-context-test-9913" to be "success or failure"
Nov 12 11:02:41.105: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019781ms
Nov 12 11:02:43.107: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004255744s
Nov 12 11:02:45.110: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006765036s
Nov 12 11:02:47.112: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008802032s
Nov 12 11:02:49.114: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011336484s
Nov 12 11:02:51.117: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013505743s
Nov 12 11:02:51.117: INFO: Pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df" satisfied condition "success or failure"
Nov 12 11:02:51.122: INFO: Got logs for pod "busybox-privileged-false-d6740397-bcb4-4eb0-be4e-294a7362a4df": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:02:51.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9913" for this suite.

• [SLOW TEST:10.045 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3102,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:02:51.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-4591
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4591 to expose endpoints map[]
Nov 12 11:02:51.150: INFO: Get endpoints failed (1.936653ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Nov 12 11:02:52.152: INFO: successfully validated that service multi-endpoint-test in namespace services-4591 exposes endpoints map[] (1.003995541s elapsed)
STEP: Creating pod pod1 in namespace services-4591
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4591 to expose endpoints map[pod1:[100]]
Nov 12 11:02:56.176: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.020088972s elapsed, will retry)
Nov 12 11:03:01.195: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.0391056s elapsed, will retry)
Nov 12 11:03:02.199: INFO: successfully validated that service multi-endpoint-test in namespace services-4591 exposes endpoints map[pod1:[100]] (10.043425848s elapsed)
STEP: Creating pod pod2 in namespace services-4591
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4591 to expose endpoints map[pod1:[100] pod2:[101]]
Nov 12 11:03:06.231: INFO: Unexpected endpoints: found map[d23a3271-b150-445c-b662-396e377aa863:[100]], expected map[pod1:[100] pod2:[101]] (4.029191846s elapsed, will retry)
Nov 12 11:03:11.260: INFO: Unexpected endpoints: found map[d23a3271-b150-445c-b662-396e377aa863:[100]], expected map[pod1:[100] pod2:[101]] (9.05781245s elapsed, will retry)
Nov 12 11:03:12.266: INFO: successfully validated that service multi-endpoint-test in namespace services-4591 exposes endpoints map[pod1:[100] pod2:[101]] (10.06422264s elapsed)
STEP: Deleting pod pod1 in namespace services-4591
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4591 to expose endpoints map[pod2:[101]]
Nov 12 11:03:12.273: INFO: successfully validated that service multi-endpoint-test in namespace services-4591 exposes endpoints map[pod2:[101]] (3.725974ms elapsed)
STEP: Deleting pod pod2 in namespace services-4591
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4591 to expose endpoints map[]
Nov 12 11:03:12.278: INFO: successfully validated that service multi-endpoint-test in namespace services-4591 exposes endpoints map[] (1.58034ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:03:12.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4591" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:21.162 seconds]
[sig-network] Services
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":189,"skipped":3127,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:03:12.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Nov 12 11:03:12.305: INFO: PodSpec: initContainers in spec.initContainers
Nov 12 11:04:04.537: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1f141952-1b47-4f46-b767-186653e9d2c4", GenerateName:"", Namespace:"init-container-8473", SelfLink:"/api/v1/namespaces/init-container-8473/pods/pod-init-1f141952-1b47-4f46-b767-186653e9d2c4", UID:"52210cf7-5944-4928-b3db-21f77a425b83", ResourceVersion:"25303", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63740775792, loc:(*time.Location)(0x7939680)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"305908148"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/networks-status":"[{\n    \"name\": \"default-cni-network\",\n    \"interface\": \"eth0\",\n    \"ips\": [\n        \"10.244.3.68\"\n    ],\n    \"mac\": \"0a:58:0a:f4:03:44\",\n    \"default\": true,\n    \"dns\": {}\n}]"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r89rm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006320000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r89rm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r89rm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r89rm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004026068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d74000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0040260f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004026110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004026118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00402611c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775792, loc:(*time.Location)(0x7939680)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775792, loc:(*time.Location)(0x7939680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775792, loc:(*time.Location)(0x7939680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740775792, loc:(*time.Location)(0x7939680)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.20.14", PodIP:"10.244.3.68", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.68"}}, StartTime:(*v1.Time)(0xc004d680a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001cde0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001cde150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://445899ac01406dc581b36827505409e06d8694a0da9e0fe3683b65ba6aa92564", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004d680e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004d680c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00402619f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:04:04.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8473" for this suite.

• [SLOW TEST:52.253 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":190,"skipped":3129,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:04:04.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Nov 12 11:04:04.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2161'
Nov 12 11:04:04.677: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 12 11:04:04.677: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Nov 12 11:04:04.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2161'
Nov 12 11:04:04.816: INFO: stderr: ""
Nov 12 11:04:04.816: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:04:04.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2161" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":191,"skipped":3140,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:04:04.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3688.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3688.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 11:04:16.875: INFO: DNS probes using dns-3688/dns-test-481b1e27-7959-4459-ac3e-1e749b4a5786 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:04:16.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3688" for this suite.

• [SLOW TEST:12.056 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":192,"skipped":3146,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:04:16.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-859
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 12 11:04:16.899: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 12 11:04:56.949: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.85:8080/dial?request=hostname&protocol=http&host=10.244.1.84&port=8080&tries=1'] Namespace:pod-network-test-859 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:04:56.949: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:04:57.066: INFO: Waiting for responses: map[]
Nov 12 11:04:57.067: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.85:8080/dial?request=hostname&protocol=http&host=10.244.3.69&port=8080&tries=1'] Namespace:pod-network-test-859 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:04:57.067: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:04:57.168: INFO: Waiting for responses: map[]
Nov 12 11:04:57.170: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.85:8080/dial?request=hostname&protocol=http&host=10.244.2.91&port=8080&tries=1'] Namespace:pod-network-test-859 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:04:57.170: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:04:57.265: INFO: Waiting for responses: map[]
Nov 12 11:04:57.268: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.85:8080/dial?request=hostname&protocol=http&host=10.244.4.104&port=8080&tries=1'] Namespace:pod-network-test-859 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:04:57.268: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:04:57.360: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:04:57.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-859" for this suite.

• [SLOW TEST:40.480 seconds]
[sig-network] Networking
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3161,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:04:57.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:04:57.394: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Nov 12 11:04:57.397: INFO: Number of nodes with available pods: 0
Nov 12 11:04:57.397: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Nov 12 11:04:57.405: INFO: Number of nodes with available pods: 0
Nov 12 11:04:57.405: INFO: Node node1 is running more than one daemon pod
Nov 12 11:04:58.408: INFO: Number of nodes with available pods: 0
Nov 12 11:04:58.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:04:59.408: INFO: Number of nodes with available pods: 0
Nov 12 11:04:59.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:00.409: INFO: Number of nodes with available pods: 0
Nov 12 11:05:00.409: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:01.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:01.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:02.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:02.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:03.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:03.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:04.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:04.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:05.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:05.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:06.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:06.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:07.408: INFO: Number of nodes with available pods: 0
Nov 12 11:05:07.408: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:08.408: INFO: Number of nodes with available pods: 1
Nov 12 11:05:08.408: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Nov 12 11:05:08.417: INFO: Number of nodes with available pods: 1
Nov 12 11:05:08.417: INFO: Number of running nodes: 0, number of available pods: 1
Nov 12 11:05:09.420: INFO: Number of nodes with available pods: 0
Nov 12 11:05:09.420: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Nov 12 11:05:09.428: INFO: Number of nodes with available pods: 0
Nov 12 11:05:09.428: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:10.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:10.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:11.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:11.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:12.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:12.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:13.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:13.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:14.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:14.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:15.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:15.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:16.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:16.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:17.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:17.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:18.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:18.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:19.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:19.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:20.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:20.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:21.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:21.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:22.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:22.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:23.431: INFO: Number of nodes with available pods: 0
Nov 12 11:05:23.431: INFO: Node node1 is running more than one daemon pod
Nov 12 11:05:24.431: INFO: Number of nodes with available pods: 1
Nov 12 11:05:24.431: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2166, will wait for the garbage collector to delete the pods
Nov 12 11:05:24.491: INFO: Deleting DaemonSet.extensions daemon-set took: 3.765122ms
Nov 12 11:05:24.791: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.257116ms
Nov 12 11:05:38.793: INFO: Number of nodes with available pods: 0
Nov 12 11:05:38.793: INFO: Number of running nodes: 0, number of available pods: 0
Nov 12 11:05:38.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2166/daemonsets","resourceVersion":"25748"},"items":null}

Nov 12 11:05:38.797: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2166/pods","resourceVersion":"25748"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:05:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2166" for this suite.

• [SLOW TEST:41.453 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":194,"skipped":3173,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:05:38.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:05:43.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1849" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":195,"skipped":3185,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:05:43.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8056.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8056.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8056.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8056.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 11:05:53.455: INFO: DNS probes using dns-test-2af402e9-0b49-4498-8841-9d69af490bbb succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8056.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8056.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8056.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8056.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 11:06:03.478: INFO: DNS probes using dns-test-b2870305-2cad-4920-a986-a5dc33fc608a succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8056.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8056.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8056.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8056.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 11:06:25.504: INFO: DNS probes using dns-test-e0e65e4c-c3fe-4d4f-99a8-89b6f8d4b0bd succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:06:25.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8056" for this suite.

• [SLOW TEST:42.096 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":196,"skipped":3186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:06:25.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 12 11:06:35.568: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:06:35.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6521" for this suite.

• [SLOW TEST:10.058 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3222,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:06:35.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 12 11:06:35.600: INFO: Waiting up to 5m0s for pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5" in namespace "emptydir-4599" to be "success or failure"
Nov 12 11:06:35.602: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119022ms
Nov 12 11:06:37.604: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004221192s
Nov 12 11:06:39.607: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007043938s
Nov 12 11:06:41.609: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009555424s
Nov 12 11:06:43.612: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012098657s
Nov 12 11:06:45.616: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016260859s
STEP: Saw pod success
Nov 12 11:06:45.616: INFO: Pod "pod-c7312c16-8f22-406c-b60a-281123e8fdc5" satisfied condition "success or failure"
Nov 12 11:06:45.618: INFO: Trying to get logs from node node2 pod pod-c7312c16-8f22-406c-b60a-281123e8fdc5 container test-container: 
STEP: delete the pod
Nov 12 11:06:45.638: INFO: Waiting for pod pod-c7312c16-8f22-406c-b60a-281123e8fdc5 to disappear
Nov 12 11:06:45.639: INFO: Pod pod-c7312c16-8f22-406c-b60a-281123e8fdc5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:06:45.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4599" for this suite.

• [SLOW TEST:10.063 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3226,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:06:45.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:06:45.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:06:55.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2339" for this suite.

• [SLOW TEST:10.138 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3249,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:06:55.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-550f6a6b-60aa-4e37-8f4a-ef591199d53a
STEP: Creating a pod to test consume configMaps
Nov 12 11:06:55.805: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0" in namespace "projected-1224" to be "success or failure"
Nov 12 11:06:55.806: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.640727ms
Nov 12 11:06:57.809: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004024437s
Nov 12 11:06:59.811: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006707263s
Nov 12 11:07:01.814: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009239468s
Nov 12 11:07:03.817: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012084141s
Nov 12 11:07:05.820: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015150396s
STEP: Saw pod success
Nov 12 11:07:05.820: INFO: Pod "pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0" satisfied condition "success or failure"
Nov 12 11:07:05.822: INFO: Trying to get logs from node node4 pod pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0 container projected-configmap-volume-test: 
STEP: delete the pod
Nov 12 11:07:06.148: INFO: Waiting for pod pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0 to disappear
Nov 12 11:07:06.150: INFO: Pod pod-projected-configmaps-cc94e30b-eaf6-4c0f-9195-5f04d76840e0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:07:06.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1224" for this suite.

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:07:06.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:07:06.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3" in namespace "projected-7905" to be "success or failure"
Nov 12 11:07:06.179: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 1.694565ms
Nov 12 11:07:08.182: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004672364s
Nov 12 11:07:10.185: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007192238s
Nov 12 11:07:12.187: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00971934s
Nov 12 11:07:14.190: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012600229s
Nov 12 11:07:16.192: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014994843s
Nov 12 11:07:18.195: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017502736s
STEP: Saw pod success
Nov 12 11:07:18.195: INFO: Pod "downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3" satisfied condition "success or failure"
Nov 12 11:07:18.197: INFO: Trying to get logs from node node3 pod downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3 container client-container: 
STEP: delete the pod
Nov 12 11:07:18.214: INFO: Waiting for pod downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3 to disappear
Nov 12 11:07:18.216: INFO: Pod downwardapi-volume-5d87d1a8-8a36-4c75-ac87-b6b96027c1f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:07:18.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7905" for this suite.

• [SLOW TEST:12.065 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3279,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:07:18.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:07:25.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9076" for this suite.

• [SLOW TEST:7.028 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":202,"skipped":3331,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:07:25.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-60ebb132-f6dc-413f-8271-a73552ea3215
STEP: Creating a pod to test consume secrets
Nov 12 11:07:25.272: INFO: Waiting up to 5m0s for pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f" in namespace "secrets-9889" to be "success or failure"
Nov 12 11:07:25.274: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.657036ms
Nov 12 11:07:27.276: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00429472s
Nov 12 11:07:29.279: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006898994s
Nov 12 11:07:31.282: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009758378s
Nov 12 11:07:33.284: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012247853s
Nov 12 11:07:35.287: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014769975s
Nov 12 11:07:37.290: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017739505s
STEP: Saw pod success
Nov 12 11:07:37.290: INFO: Pod "pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f" satisfied condition "success or failure"
Nov 12 11:07:37.292: INFO: Trying to get logs from node node1 pod pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f container secret-volume-test: 
STEP: delete the pod
Nov 12 11:07:37.310: INFO: Waiting for pod pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f to disappear
Nov 12 11:07:37.312: INFO: Pod pod-secrets-130b9e1c-85b0-4f7f-b7e7-21beb1c3d27f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:07:37.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9889" for this suite.

• [SLOW TEST:12.068 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3339,"failed":0}
SS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:07:37.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:07:37.337: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31" in namespace "security-context-test-7546" to be "success or failure"
Nov 12 11:07:37.338: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 1.407238ms
Nov 12 11:07:39.342: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004940543s
Nov 12 11:07:41.345: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007938978s
Nov 12 11:07:43.347: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010623008s
Nov 12 11:07:45.350: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013253187s
Nov 12 11:07:47.354: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 10.017658603s
Nov 12 11:07:49.358: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Pending", Reason="", readiness=false. Elapsed: 12.020918363s
Nov 12 11:07:51.361: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.024258943s
Nov 12 11:07:51.361: INFO: Pod "alpine-nnp-false-852d1a4c-ce29-43c2-9123-c41c9c323a31" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:07:51.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7546" for this suite.

• [SLOW TEST:14.054 seconds]
[k8s.io] Security Context
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3341,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:07:51.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:08:03.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7175" for this suite.

• [SLOW TEST:12.040 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:08:03.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:08:03.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6341" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":206,"skipped":3389,"failed":0}
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:08:03.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Nov 12 11:08:03.451: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:08:14.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4395" for this suite.

• [SLOW TEST:11.318 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":207,"skipped":3393,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:08:14.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Nov 12 11:08:14.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:08:30.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3126" for this suite.

• [SLOW TEST:15.948 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":208,"skipped":3410,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:08:30.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3011, will wait for the garbage collector to delete the pods
Nov 12 11:08:40.782: INFO: Deleting Job.batch foo took: 3.723193ms
Nov 12 11:08:41.082: INFO: Terminating Job.batch foo pods took: 300.275052ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:09:18.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3011" for this suite.

• [SLOW TEST:48.286 seconds]
[sig-apps] Job
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":209,"skipped":3421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:09:18.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6769
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Nov 12 11:09:19.014: INFO: Found 0 stateful pods, waiting for 3
Nov 12 11:09:29.017: INFO: Found 2 stateful pods, waiting for 3
Nov 12 11:09:39.017: INFO: Found 2 stateful pods, waiting for 3
Nov 12 11:09:49.018: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:09:49.018: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:09:49.018: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Nov 12 11:09:59.017: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:09:59.017: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:09:59.017: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:09:59.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6769 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 11:09:59.359: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 11:09:59.359: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 11:09:59.359: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Nov 12 11:10:09.402: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Nov 12 11:10:19.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6769 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 11:10:19.670: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Nov 12 11:10:19.670: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 11:10:19.670: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 11:10:29.683: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:10:29.683: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 11:10:29.683: INFO: Waiting for Pod statefulset-6769/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 11:10:39.687: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:10:39.688: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 11:10:39.688: INFO: Waiting for Pod statefulset-6769/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 11:10:49.688: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:10:49.688: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 11:10:59.687: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:10:59.688: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Nov 12 11:11:09.688: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
STEP: Rolling back to a previous revision
Nov 12 11:11:19.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6769 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 11:11:19.948: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 11:11:19.948: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 11:11:19.948: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 11:11:29.973: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Nov 12 11:11:39.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6769 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 11:11:40.246: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Nov 12 11:11:40.247: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 11:11:40.248: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 11:11:50.263: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:11:50.263: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Nov 12 11:11:50.263: INFO: Waiting for Pod statefulset-6769/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Nov 12 11:12:00.269: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:12:00.269: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Nov 12 11:12:00.269: INFO: Waiting for Pod statefulset-6769/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Nov 12 11:12:10.269: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:12:10.269: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Nov 12 11:12:20.268: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
Nov 12 11:12:20.268: INFO: Waiting for Pod statefulset-6769/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Nov 12 11:12:30.268: INFO: Waiting for StatefulSet statefulset-6769/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Nov 12 11:12:40.268: INFO: Deleting all statefulset in ns statefulset-6769
Nov 12 11:12:40.270: INFO: Scaling statefulset ss2 to 0
Nov 12 11:13:00.278: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 11:13:00.280: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:13:00.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6769" for this suite.

• [SLOW TEST:221.300 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":210,"skipped":3479,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:13:00.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:13:00.310: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Nov 12 11:13:05.313: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 12 11:13:11.318: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Nov 12 11:13:21.335: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-688 /apis/apps/v1/namespaces/deployment-688/deployments/test-cleanup-deployment 6beb62d3-0898-4b1e-b5ea-607723cf3d78 27800 1 2020-11-12 11:13:11 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004baa9e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-12 11:13:11 +0000 UTC,LastTransitionTime:2020-11-12 11:13:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-11-12 11:13:21 +0000 UTC,LastTransitionTime:2020-11-12 11:13:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Nov 12 11:13:21.337: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-688 /apis/apps/v1/namespaces/deployment-688/replicasets/test-cleanup-deployment-55ffc6b7b6 ea0f5587-e562-430f-84d9-8d9cd09aa273 27789 1 2020-11-12 11:13:11 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6beb62d3-0898-4b1e-b5ea-607723cf3d78 0xc004baadc7 0xc004baadc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004baae48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Nov 12 11:13:21.339: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-pmjwb" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-pmjwb test-cleanup-deployment-55ffc6b7b6- deployment-688 /api/v1/namespaces/deployment-688/pods/test-cleanup-deployment-55ffc6b7b6-pmjwb 64bd9cef-9d60-4584-a740-efb2cbf93cf7 27788 0 2020-11-12 11:13:11 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[k8s.v1.cni.cncf.io/networks-status:[{
    "name": "default-cni-network",
    "interface": "eth0",
    "ips": [
        "10.244.1.95"
    ],
    "mac": "0a:58:0a:f4:01:5f",
    "default": true,
    "dns": {}
}]] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 ea0f5587-e562-430f-84d9-8d9cd09aa273 0xc004bab1e7 0xc004bab1e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9vhst,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9vhst,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9vhst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 11:13:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 11:13:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 11:13:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-12 11:13:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.20.13,PodIP:10.244.1.95,StartTime:2020-11-12 11:13:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-12 11:13:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://289fd531a3ce515d77f59270b5e8a8b3612a4347a763006586e0156412a46e16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:13:21.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-688" for this suite.

• [SLOW TEST:21.054 seconds]
[sig-apps] Deployment
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":211,"skipped":3483,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:13:21.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:13:21.378: INFO: Create a RollingUpdate DaemonSet
Nov 12 11:13:21.380: INFO: Check that daemon pods launch on every node of the cluster
Nov 12 11:13:21.383: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:21.384: INFO: Number of nodes with available pods: 0
Nov 12 11:13:21.384: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:22.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:22.391: INFO: Number of nodes with available pods: 0
Nov 12 11:13:22.391: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:23.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:23.391: INFO: Number of nodes with available pods: 0
Nov 12 11:13:23.391: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:24.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:24.391: INFO: Number of nodes with available pods: 0
Nov 12 11:13:24.391: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:25.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:25.390: INFO: Number of nodes with available pods: 0
Nov 12 11:13:25.390: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:26.387: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:26.390: INFO: Number of nodes with available pods: 0
Nov 12 11:13:26.390: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:27.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:27.391: INFO: Number of nodes with available pods: 0
Nov 12 11:13:27.391: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:28.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:28.391: INFO: Number of nodes with available pods: 0
Nov 12 11:13:28.391: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:29.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:29.391: INFO: Number of nodes with available pods: 0
Nov 12 11:13:29.391: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:30.387: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:30.389: INFO: Number of nodes with available pods: 0
Nov 12 11:13:30.390: INFO: Node node1 is running more than one daemon pod
Nov 12 11:13:31.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:31.390: INFO: Number of nodes with available pods: 4
Nov 12 11:13:31.390: INFO: Number of running nodes: 4, number of available pods: 4
Nov 12 11:13:31.390: INFO: Update the DaemonSet to trigger a rollout
Nov 12 11:13:31.394: INFO: Updating DaemonSet daemon-set
Nov 12 11:13:39.408: INFO: Roll back the DaemonSet before rollout is complete
Nov 12 11:13:39.412: INFO: Updating DaemonSet daemon-set
Nov 12 11:13:39.412: INFO: Make sure DaemonSet rollback is complete
Nov 12 11:13:39.414: INFO: Wrong image for pod: daemon-set-vvgkt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Nov 12 11:13:39.414: INFO: Pod daemon-set-vvgkt is not available
Nov 12 11:13:39.417: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:40.419: INFO: Wrong image for pod: daemon-set-vvgkt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Nov 12 11:13:40.419: INFO: Pod daemon-set-vvgkt is not available
Nov 12 11:13:40.422: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:41.420: INFO: Wrong image for pod: daemon-set-vvgkt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Nov 12 11:13:41.420: INFO: Pod daemon-set-vvgkt is not available
Nov 12 11:13:41.423: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:13:42.421: INFO: Pod daemon-set-4jj77 is not available
Nov 12 11:13:42.425: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2685, will wait for the garbage collector to delete the pods
Nov 12 11:13:42.487: INFO: Deleting DaemonSet.extensions daemon-set took: 3.702932ms
Nov 12 11:13:42.787: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241412ms
Nov 12 11:13:49.190: INFO: Number of nodes with available pods: 0
Nov 12 11:13:49.190: INFO: Number of running nodes: 0, number of available pods: 0
Nov 12 11:13:49.192: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2685/daemonsets","resourceVersion":"27996"},"items":null}

Nov 12 11:13:49.193: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2685/pods","resourceVersion":"27996"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:13:49.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2685" for this suite.

• [SLOW TEST:27.864 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":212,"skipped":3502,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:13:49.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-a1ad5530-e063-4a80-a19f-5e3cfe48b284
STEP: Creating a pod to test consume secrets
Nov 12 11:13:49.231: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe" in namespace "projected-8585" to be "success or failure"
Nov 12 11:13:49.232: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe": Phase="Pending", Reason="", readiness=false. Elapsed: 1.531601ms
Nov 12 11:13:51.235: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004372568s
Nov 12 11:13:53.238: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006927582s
Nov 12 11:13:55.240: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009488064s
Nov 12 11:13:57.243: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012053109s
Nov 12 11:13:59.246: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014905021s
STEP: Saw pod success
Nov 12 11:13:59.246: INFO: Pod "pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe" satisfied condition "success or failure"
Nov 12 11:13:59.248: INFO: Trying to get logs from node node2 pod pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe container projected-secret-volume-test: 
STEP: delete the pod
Nov 12 11:13:59.269: INFO: Waiting for pod pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe to disappear
Nov 12 11:13:59.271: INFO: Pod pod-projected-secrets-5d2738e4-98e7-4b91-812e-48867d6949fe no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:13:59.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8585" for this suite.

• [SLOW TEST:10.066 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3523,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:13:59.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7458
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7458
STEP: Creating statefulset with conflicting port in namespace statefulset-7458
STEP: Waiting until pod test-pod will start running in namespace statefulset-7458
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7458
Nov 12 11:14:11.307: INFO: Observed stateful pod in namespace: statefulset-7458, name: ss-0, uid: 659f2403-c47b-4bbb-b6c5-24ff231615cd, status phase: Failed. Waiting for statefulset controller to delete.
Nov 12 11:14:11.307: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7458
STEP: Removing pod with conflicting port in namespace statefulset-7458
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7458 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Nov 12 11:14:23.330: INFO: Deleting all statefulset in ns statefulset-7458
Nov 12 11:14:23.332: INFO: Scaling statefulset ss to 0
Nov 12 11:14:33.342: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 11:14:33.344: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:14:33.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7458" for this suite.

• [SLOW TEST:34.083 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":214,"skipped":3537,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:14:33.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Nov 12 11:14:33.379: INFO: Pod name pod-release: Found 0 pods out of 1
Nov 12 11:14:38.385: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:14:39.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1137" for this suite.

• [SLOW TEST:6.041 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":215,"skipped":3545,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:14:39.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-8743d8ab-0b28-4992-9a16-e805167d5aa9 in namespace container-probe-4886
Nov 12 11:14:51.424: INFO: Started pod test-webserver-8743d8ab-0b28-4992-9a16-e805167d5aa9 in namespace container-probe-4886
STEP: checking the pod's current state and verifying that restartCount is present
Nov 12 11:14:51.426: INFO: Initial restart count of pod test-webserver-8743d8ab-0b28-4992-9a16-e805167d5aa9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:18:51.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4886" for this suite.

• [SLOW TEST:252.363 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3567,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:18:51.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Nov 12 11:18:51.783: INFO: Waiting up to 5m0s for pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c" in namespace "emptydir-8257" to be "success or failure"
Nov 12 11:18:51.784: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.453978ms
Nov 12 11:18:53.787: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004137428s
Nov 12 11:18:55.790: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006809893s
Nov 12 11:18:57.792: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009334046s
Nov 12 11:18:59.795: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011902992s
Nov 12 11:19:01.797: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013930649s
STEP: Saw pod success
Nov 12 11:19:01.797: INFO: Pod "pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c" satisfied condition "success or failure"
Nov 12 11:19:01.798: INFO: Trying to get logs from node node3 pod pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c container test-container: 
STEP: delete the pod
Nov 12 11:19:01.814: INFO: Waiting for pod pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c to disappear
Nov 12 11:19:01.816: INFO: Pod pod-6a431d16-88cd-4793-8694-1a18d4a3ba4c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:19:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8257" for this suite.

• [SLOW TEST:10.055 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3567,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:19:01.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Nov 12 11:19:01.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Nov 12 11:19:01.938: INFO: stderr: ""
Nov 12 11:19:01.938: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.0.20.12:6443\x1b[0m\n\x1b[0;32mcoredns\x1b[0m is running at \x1b[0;33mhttps://10.0.20.12:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy\x1b[0m\n\x1b[0;32mkubernetes-dashboard\x1b[0m is running at \x1b[0;33mhttps://10.0.20.12:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\x1b[0m\n\x1b[0;32mKubeRegistry\x1b[0m is running at \x1b[0;33mhttps://10.0.20.12:6443/api/v1/namespaces/kube-system/services/registry:registry/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:19:01.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-496" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":218,"skipped":3574,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:19:01.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:19:38.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8836" for this suite.
STEP: Destroying namespace "nsdeletetest-5230" for this suite.
Nov 12 11:19:39.002: INFO: Namespace nsdeletetest-5230 was already deleted
STEP: Destroying namespace "nsdeletetest-6463" for this suite.

• [SLOW TEST:37.057 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":219,"skipped":3578,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:19:39.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-b9rp
STEP: Creating a pod to test atomic-volume-subpath
Nov 12 11:19:39.025: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b9rp" in namespace "subpath-5624" to be "success or failure"
Nov 12 11:19:39.026: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Pending", Reason="", readiness=false. Elapsed: 1.496397ms
Nov 12 11:19:41.030: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004945705s
Nov 12 11:19:43.033: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007653936s
Nov 12 11:19:45.036: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011135899s
Nov 12 11:19:47.039: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01358489s
Nov 12 11:19:49.041: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 10.015728508s
Nov 12 11:19:51.043: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 12.018134914s
Nov 12 11:19:53.046: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 14.02057472s
Nov 12 11:19:55.050: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 16.024582643s
Nov 12 11:19:57.052: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 18.027117868s
Nov 12 11:19:59.054: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 20.029465265s
Nov 12 11:20:01.057: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 22.031611978s
Nov 12 11:20:03.059: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 24.034061565s
Nov 12 11:20:05.063: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 26.03759258s
Nov 12 11:20:07.065: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 28.03995547s
Nov 12 11:20:09.068: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Running", Reason="", readiness=true. Elapsed: 30.042820458s
Nov 12 11:20:11.070: INFO: Pod "pod-subpath-test-configmap-b9rp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.045423827s
STEP: Saw pod success
Nov 12 11:20:11.070: INFO: Pod "pod-subpath-test-configmap-b9rp" satisfied condition "success or failure"
Nov 12 11:20:11.072: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-b9rp container test-container-subpath-configmap-b9rp: 
STEP: delete the pod
Nov 12 11:20:11.091: INFO: Waiting for pod pod-subpath-test-configmap-b9rp to disappear
Nov 12 11:20:11.092: INFO: Pod pod-subpath-test-configmap-b9rp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-b9rp
Nov 12 11:20:11.092: INFO: Deleting pod "pod-subpath-test-configmap-b9rp" in namespace "subpath-5624"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5624" for this suite.

• [SLOW TEST:32.094 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":220,"skipped":3581,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:11.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-6309993b-7f00-4936-b46a-73d44fe0beb6
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:11.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5346" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":221,"skipped":3588,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:11.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Nov 12 11:20:23.655: INFO: Successfully updated pod "labelsupdate637c2031-1162-4bae-9825-e3f93d8b985e"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:25.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2677" for this suite.

• [SLOW TEST:14.553 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3604,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:25.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:25.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8736" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":223,"skipped":3609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:25.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:25.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7148" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":224,"skipped":3638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:25.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:35.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2384" for this suite.

• [SLOW TEST:10.037 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3677,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:35.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-065c5dff-0ecc-4a1a-9415-2aea6814a65a
STEP: Creating a pod to test consume secrets
Nov 12 11:20:35.809: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a" in namespace "projected-8993" to be "success or failure"
Nov 12 11:20:35.810: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.515326ms
Nov 12 11:20:37.813: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003846757s
Nov 12 11:20:39.815: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006607344s
Nov 12 11:20:41.818: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008884271s
Nov 12 11:20:43.821: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011843226s
Nov 12 11:20:45.823: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014247834s
STEP: Saw pod success
Nov 12 11:20:45.823: INFO: Pod "pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a" satisfied condition "success or failure"
Nov 12 11:20:45.825: INFO: Trying to get logs from node node3 pod pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a container projected-secret-volume-test: 
STEP: delete the pod
Nov 12 11:20:45.835: INFO: Waiting for pod pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a to disappear
Nov 12 11:20:45.836: INFO: Pod pod-projected-secrets-5dd184ce-efb6-4d70-b00f-6b8e9f9adc1a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:45.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8993" for this suite.

• [SLOW TEST:10.050 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3680,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:45.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:182
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:45.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9873" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":227,"skipped":3686,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:45.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 11:20:46.520: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 11:20:48.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:20:50.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:20:52.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:20:54.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740776846, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 11:20:57.534: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:20:57.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-58" for this suite.
STEP: Destroying namespace "webhook-58-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.779 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":228,"skipped":3725,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:20:57.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Nov 12 11:21:19.681: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 12 11:21:19.682: INFO: Pod pod-with-prestop-http-hook still exists
Nov 12 11:21:21.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 12 11:21:21.685: INFO: Pod pod-with-prestop-http-hook still exists
Nov 12 11:21:23.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 12 11:21:23.686: INFO: Pod pod-with-prestop-http-hook still exists
Nov 12 11:21:25.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 12 11:21:25.685: INFO: Pod pod-with-prestop-http-hook still exists
Nov 12 11:21:27.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 12 11:21:27.686: INFO: Pod pod-with-prestop-http-hook still exists
Nov 12 11:21:29.683: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 12 11:21:29.685: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:21:29.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7762" for this suite.

• [SLOW TEST:32.059 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3730,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:21:29.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:21:29.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70" in namespace "projected-2847" to be "success or failure"
Nov 12 11:21:29.727: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Pending", Reason="", readiness=false. Elapsed: 1.527449ms
Nov 12 11:21:31.729: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003791329s
Nov 12 11:21:33.732: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006506152s
Nov 12 11:21:35.735: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009192277s
Nov 12 11:21:37.738: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012011611s
Nov 12 11:21:39.740: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014954796s
Nov 12 11:21:41.744: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017983315s
STEP: Saw pod success
Nov 12 11:21:41.744: INFO: Pod "downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70" satisfied condition "success or failure"
Nov 12 11:21:41.746: INFO: Trying to get logs from node node3 pod downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70 container client-container: 
STEP: delete the pod
Nov 12 11:21:41.756: INFO: Waiting for pod downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70 to disappear
Nov 12 11:21:41.758: INFO: Pod downwardapi-volume-4ed77eef-e590-486b-ac61-3d689c879c70 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:21:41.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2847" for this suite.

• [SLOW TEST:12.058 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3733,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:21:41.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4635
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-4635
Nov 12 11:21:41.784: INFO: Found 0 stateful pods, waiting for 1
Nov 12 11:21:51.788: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Nov 12 11:22:01.788: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Nov 12 11:22:01.800: INFO: Deleting all statefulset in ns statefulset-4635
Nov 12 11:22:01.802: INFO: Scaling statefulset ss to 0
Nov 12 11:22:11.810: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 11:22:11.812: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:22:11.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4635" for this suite.

• [SLOW TEST:30.061 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":231,"skipped":3738,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:22:11.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:50
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Nov 12 11:22:21.855: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Nov 12 11:22:31.969: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:22:31.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7836" for this suite.

• [SLOW TEST:20.157 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":232,"skipped":3755,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:22:31.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-62xx
STEP: Creating a pod to test atomic-volume-subpath
Nov 12 11:22:32.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-62xx" in namespace "subpath-1445" to be "success or failure"
Nov 12 11:22:32.009: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Pending", Reason="", readiness=false. Elapsed: 1.773749ms
Nov 12 11:22:34.012: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004440726s
Nov 12 11:22:36.014: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006776243s
Nov 12 11:22:38.020: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012527417s
Nov 12 11:22:40.023: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015168125s
Nov 12 11:22:42.025: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 10.017824071s
Nov 12 11:22:44.028: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 12.020748331s
Nov 12 11:22:46.031: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 14.023284229s
Nov 12 11:22:48.034: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 16.026350175s
Nov 12 11:22:50.037: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 18.029308209s
Nov 12 11:22:52.040: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 20.032983311s
Nov 12 11:22:54.043: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 22.035658775s
Nov 12 11:22:56.046: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 24.038183681s
Nov 12 11:22:58.048: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 26.040383558s
Nov 12 11:23:00.050: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Running", Reason="", readiness=true. Elapsed: 28.043083195s
Nov 12 11:23:02.054: INFO: Pod "pod-subpath-test-configmap-62xx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.046193103s
STEP: Saw pod success
Nov 12 11:23:02.054: INFO: Pod "pod-subpath-test-configmap-62xx" satisfied condition "success or failure"
Nov 12 11:23:02.056: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-62xx container test-container-subpath-configmap-62xx: 
STEP: delete the pod
Nov 12 11:23:02.073: INFO: Waiting for pod pod-subpath-test-configmap-62xx to disappear
Nov 12 11:23:02.074: INFO: Pod pod-subpath-test-configmap-62xx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-62xx
Nov 12 11:23:02.074: INFO: Deleting pod "pod-subpath-test-configmap-62xx" in namespace "subpath-1445"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:23:02.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1445" for this suite.

• [SLOW TEST:30.099 seconds]
[sig-storage] Subpath
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":233,"skipped":3759,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:23:02.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Nov 12 11:23:02.103: INFO: Waiting up to 5m0s for pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0" in namespace "downward-api-8482" to be "success or failure"
Nov 12 11:23:02.105: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.555382ms
Nov 12 11:23:04.107: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004235493s
Nov 12 11:23:06.110: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006860847s
Nov 12 11:23:08.112: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009260135s
Nov 12 11:23:10.115: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011993578s
Nov 12 11:23:12.118: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015456133s
STEP: Saw pod success
Nov 12 11:23:12.119: INFO: Pod "downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0" satisfied condition "success or failure"
Nov 12 11:23:12.120: INFO: Trying to get logs from node node3 pod downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0 container dapi-container: 
STEP: delete the pod
Nov 12 11:23:12.139: INFO: Waiting for pod downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0 to disappear
Nov 12 11:23:12.140: INFO: Pod downward-api-3b3ee398-a80f-43ab-8b2f-74257825e4c0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:23:12.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8482" for this suite.

• [SLOW TEST:10.065 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3791,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:23:12.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2573
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 12 11:23:12.164: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 12 11:23:52.220: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.82:8080/dial?request=hostname&protocol=udp&host=10.244.1.103&port=8081&tries=1'] Namespace:pod-network-test-2573 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:23:52.220: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:23:52.338: INFO: Waiting for responses: map[]
Nov 12 11:23:52.339: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.82:8080/dial?request=hostname&protocol=udp&host=10.244.3.81&port=8081&tries=1'] Namespace:pod-network-test-2573 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:23:52.339: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:23:52.447: INFO: Waiting for responses: map[]
Nov 12 11:23:52.449: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.82:8080/dial?request=hostname&protocol=udp&host=10.244.2.107&port=8081&tries=1'] Namespace:pod-network-test-2573 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:23:52.449: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:23:52.547: INFO: Waiting for responses: map[]
Nov 12 11:23:52.549: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.82:8080/dial?request=hostname&protocol=udp&host=10.244.4.113&port=8081&tries=1'] Namespace:pod-network-test-2573 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:23:52.549: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:23:52.645: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:23:52.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2573" for this suite.

• [SLOW TEST:40.502 seconds]
[sig-network] Networking
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3812,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:23:52.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-6aba15e0-015b-4fb8-a295-0106aa8c969c
STEP: Creating a pod to test consume secrets
Nov 12 11:23:52.669: INFO: Waiting up to 5m0s for pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9" in namespace "secrets-2222" to be "success or failure"
Nov 12 11:23:52.670: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9": Phase="Pending", Reason="", readiness=false. Elapsed: 1.485043ms
Nov 12 11:23:54.674: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004634236s
Nov 12 11:23:56.677: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007568233s
Nov 12 11:23:58.679: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01019166s
Nov 12 11:24:00.682: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012661923s
Nov 12 11:24:02.684: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014893619s
STEP: Saw pod success
Nov 12 11:24:02.684: INFO: Pod "pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9" satisfied condition "success or failure"
Nov 12 11:24:02.686: INFO: Trying to get logs from node node1 pod pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9 container secret-volume-test: 
STEP: delete the pod
Nov 12 11:24:02.695: INFO: Waiting for pod pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9 to disappear
Nov 12 11:24:02.697: INFO: Pod pod-secrets-8e8b7c2b-7386-40c4-98ea-08ec1d6e56c9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:24:02.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2222" for this suite.

• [SLOW TEST:10.053 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3814,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:24:02.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 11:24:03.245: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 11:24:05.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:24:07.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:24:09.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:24:11.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:24:13.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777043, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 11:24:16.258: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:24:16.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5578" for this suite.
STEP: Destroying namespace "webhook-5578-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.601 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":237,"skipped":3814,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:24:16.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:24:16.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5" in namespace "projected-5437" to be "success or failure"
Nov 12 11:24:16.324: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.356632ms
Nov 12 11:24:18.327: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004229612s
Nov 12 11:24:20.330: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006907475s
Nov 12 11:24:22.333: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009892742s
Nov 12 11:24:24.337: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013956358s
Nov 12 11:24:26.339: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016602525s
STEP: Saw pod success
Nov 12 11:24:26.339: INFO: Pod "downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5" satisfied condition "success or failure"
Nov 12 11:24:26.344: INFO: Trying to get logs from node node3 pod downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5 container client-container: 
STEP: delete the pod
Nov 12 11:24:26.365: INFO: Waiting for pod downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5 to disappear
Nov 12 11:24:26.366: INFO: Pod downwardapi-volume-fbdaf7b9-5c77-4bf1-af56-0ce202dacca5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:24:26.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5437" for this suite.

• [SLOW TEST:10.065 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3878,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:24:26.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:24:26.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91" in namespace "downward-api-6849" to be "success or failure"
Nov 12 11:24:26.394: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Pending", Reason="", readiness=false. Elapsed: 1.721749ms
Nov 12 11:24:28.396: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004183331s
Nov 12 11:24:30.399: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00672429s
Nov 12 11:24:32.401: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009440783s
Nov 12 11:24:34.404: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01196254s
Nov 12 11:24:36.407: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015336158s
Nov 12 11:24:38.413: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.020962227s
STEP: Saw pod success
Nov 12 11:24:38.414: INFO: Pod "downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91" satisfied condition "success or failure"
Nov 12 11:24:38.418: INFO: Trying to get logs from node node1 pod downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91 container client-container: 
STEP: delete the pod
Nov 12 11:24:38.429: INFO: Waiting for pod downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91 to disappear
Nov 12 11:24:38.430: INFO: Pod downwardapi-volume-4b1d700b-6fa3-4b1c-a829-0c94ab5acb91 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:24:38.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6849" for this suite.

• [SLOW TEST:12.063 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3884,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:24:38.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Nov 12 11:24:38.451: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:24:41.390: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:24:51.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3045" for this suite.

• [SLOW TEST:12.998 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":240,"skipped":3890,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:24:51.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Nov 12 11:24:51.449: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:25:02.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8327" for this suite.

• [SLOW TEST:11.393 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":241,"skipped":3900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:25:02.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Nov 12 11:25:12.854: INFO: Pod pod-hostip-e0ee4bf1-137e-4b90-8b1a-88df9fe1d40d has hostIP: 10.0.20.14
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:25:12.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-827" for this suite.

• [SLOW TEST:10.032 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3922,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:25:12.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:25:12.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039" in namespace "projected-5163" to be "success or failure"
Nov 12 11:25:12.879: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039": Phase="Pending", Reason="", readiness=false. Elapsed: 1.300831ms
Nov 12 11:25:14.882: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003726152s
Nov 12 11:25:16.884: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00641854s
Nov 12 11:25:18.887: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009136804s
Nov 12 11:25:20.890: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011602071s
Nov 12 11:25:22.892: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014093955s
STEP: Saw pod success
Nov 12 11:25:22.892: INFO: Pod "downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039" satisfied condition "success or failure"
Nov 12 11:25:22.894: INFO: Trying to get logs from node node3 pod downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039 container client-container: 
STEP: delete the pod
Nov 12 11:25:22.903: INFO: Waiting for pod downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039 to disappear
Nov 12 11:25:22.905: INFO: Pod downwardapi-volume-0bc63cf3-d660-4ea3-962e-eca6937d4039 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:25:22.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5163" for this suite.

• [SLOW TEST:10.054 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3926,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:25:22.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:25:32.946: INFO: Waiting up to 5m0s for pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d" in namespace "pods-7084" to be "success or failure"
Nov 12 11:25:32.948: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.726247ms
Nov 12 11:25:34.950: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004342597s
Nov 12 11:25:36.953: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006914324s
Nov 12 11:25:38.956: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010387256s
Nov 12 11:25:40.959: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012854268s
Nov 12 11:25:42.962: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016039045s
STEP: Saw pod success
Nov 12 11:25:42.962: INFO: Pod "client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d" satisfied condition "success or failure"
Nov 12 11:25:42.964: INFO: Trying to get logs from node node1 pod client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d container env3cont: 
STEP: delete the pod
Nov 12 11:25:42.974: INFO: Waiting for pod client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d to disappear
Nov 12 11:25:42.976: INFO: Pod client-envvars-a60981cf-7e1a-4495-9d9f-f6508709329d no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:25:42.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7084" for this suite.

• [SLOW TEST:20.067 seconds]
[k8s.io] Pods
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3934,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:25:42.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bc664e89-74e4-403d-935b-c959c0e0242d
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-bc664e89-74e4-403d-935b-c959c0e0242d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:25:57.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9537" for this suite.

• [SLOW TEST:14.063 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4000,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:25:57.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 12 11:26:07.099: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:26:07.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1005" for this suite.

• [SLOW TEST:10.062 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4070,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:26:07.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-2521c0ed-1a7c-4038-83a8-3c658926eb38
STEP: Creating a pod to test consume configMaps
Nov 12 11:26:07.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5" in namespace "configmap-543" to be "success or failure"
Nov 12 11:26:07.131: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.47182ms
Nov 12 11:26:09.134: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004006822s
Nov 12 11:26:11.136: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006793117s
Nov 12 11:26:13.139: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009535517s
Nov 12 11:26:15.142: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012493751s
Nov 12 11:26:17.144: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.014713913s
STEP: Saw pod success
Nov 12 11:26:17.144: INFO: Pod "pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5" satisfied condition "success or failure"
Nov 12 11:26:17.149: INFO: Trying to get logs from node node2 pod pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5 container configmap-volume-test: 
STEP: delete the pod
Nov 12 11:26:17.344: INFO: Waiting for pod pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5 to disappear
Nov 12 11:26:17.346: INFO: Pod pod-configmaps-61c51551-aae0-4be4-85e8-1e8441dd16f5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:26:17.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-543" for this suite.

• [SLOW TEST:10.241 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4111,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:26:17.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1983.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1983.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1983.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1983.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 12 11:26:29.396: INFO: DNS probes using dns-1983/dns-test-96de71a8-64ab-4d5a-971d-e13c6c719b5f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:26:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1983" for this suite.

• [SLOW TEST:12.055 seconds]
[sig-network] DNS
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":248,"skipped":4123,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:26:29.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Nov 12 11:26:29.426: INFO: Waiting up to 5m0s for pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a" in namespace "emptydir-3454" to be "success or failure"
Nov 12 11:26:29.428: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.850614ms
Nov 12 11:26:31.431: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004301476s
Nov 12 11:26:33.433: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006566558s
Nov 12 11:26:35.436: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009457174s
Nov 12 11:26:37.439: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01251443s
Nov 12 11:26:39.441: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015020798s
STEP: Saw pod success
Nov 12 11:26:39.441: INFO: Pod "pod-2a757e68-2334-486e-80e0-8edae3e2084a" satisfied condition "success or failure"
Nov 12 11:26:39.443: INFO: Trying to get logs from node node2 pod pod-2a757e68-2334-486e-80e0-8edae3e2084a container test-container: 
STEP: delete the pod
Nov 12 11:26:39.454: INFO: Waiting for pod pod-2a757e68-2334-486e-80e0-8edae3e2084a to disappear
Nov 12 11:26:39.456: INFO: Pod pod-2a757e68-2334-486e-80e0-8edae3e2084a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:26:39.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3454" for this suite.

• [SLOW TEST:10.055 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4131,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:26:39.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 11:26:39.862: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 11:26:41.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:26:43.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:26:45.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:26:47.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777199, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 11:26:50.875: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Nov 12 11:26:50.890: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:26:50.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3917" for this suite.
STEP: Destroying namespace "webhook-3917-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.462 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":250,"skipped":4147,"failed":0}
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:26:50.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:26:50.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0" in namespace "downward-api-2751" to be "success or failure"
Nov 12 11:26:50.943: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.574294ms
Nov 12 11:26:52.946: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004113121s
Nov 12 11:26:54.948: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006972205s
Nov 12 11:26:56.951: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009746915s
Nov 12 11:26:58.954: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01279351s
Nov 12 11:27:00.956: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015056639s
STEP: Saw pod success
Nov 12 11:27:00.957: INFO: Pod "downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0" satisfied condition "success or failure"
Nov 12 11:27:00.958: INFO: Trying to get logs from node node2 pod downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0 container client-container: 
STEP: delete the pod
Nov 12 11:27:00.966: INFO: Waiting for pod downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0 to disappear
Nov 12 11:27:00.967: INFO: Pod downwardapi-volume-870aab13-0c05-494b-b1e7-e131975ad6b0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:27:00.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2751" for this suite.

• [SLOW TEST:10.048 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4147,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:27:00.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 12 11:27:00.989: INFO: Waiting up to 5m0s for pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff" in namespace "emptydir-2734" to be "success or failure"
Nov 12 11:27:00.990: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff": Phase="Pending", Reason="", readiness=false. Elapsed: 1.3855ms
Nov 12 11:27:02.992: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003888093s
Nov 12 11:27:04.995: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006618774s
Nov 12 11:27:06.998: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009090698s
Nov 12 11:27:09.000: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011471476s
Nov 12 11:27:11.002: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.01379812s
STEP: Saw pod success
Nov 12 11:27:11.002: INFO: Pod "pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff" satisfied condition "success or failure"
Nov 12 11:27:11.004: INFO: Trying to get logs from node node1 pod pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff container test-container: 
STEP: delete the pod
Nov 12 11:27:11.014: INFO: Waiting for pod pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff to disappear
Nov 12 11:27:11.016: INFO: Pod pod-52b1f15a-b5d0-4dea-a87a-6a534bb659ff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:27:11.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2734" for this suite.

• [SLOW TEST:10.049 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4157,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:27:11.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-c08e1c35-a5c6-409f-a26c-67e9e02fa8dc
STEP: Creating a pod to test consume configMaps
Nov 12 11:27:11.044: INFO: Waiting up to 5m0s for pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133" in namespace "configmap-9533" to be "success or failure"
Nov 12 11:27:11.046: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Pending", Reason="", readiness=false. Elapsed: 1.978093ms
Nov 12 11:27:13.048: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004479083s
Nov 12 11:27:15.051: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007248719s
Nov 12 11:27:17.054: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010107984s
Nov 12 11:27:19.056: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012746299s
Nov 12 11:27:21.059: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Pending", Reason="", readiness=false. Elapsed: 10.015269245s
Nov 12 11:27:23.068: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.023873228s
STEP: Saw pod success
Nov 12 11:27:23.068: INFO: Pod "pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133" satisfied condition "success or failure"
Nov 12 11:27:23.070: INFO: Trying to get logs from node node2 pod pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133 container configmap-volume-test: 
STEP: delete the pod
Nov 12 11:27:23.080: INFO: Waiting for pod pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133 to disappear
Nov 12 11:27:23.081: INFO: Pod pod-configmaps-da63d45f-b947-48c4-801c-59c3d73b9133 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:27:23.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9533" for this suite.

• [SLOW TEST:12.065 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4161,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:27:23.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W1112 11:27:24.126321      10 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 12 11:27:24.126: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:27:24.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4823" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":254,"skipped":4178,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:27:24.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:27:24.150: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e" in namespace "projected-5761" to be "success or failure"
Nov 12 11:27:24.151: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.395372ms
Nov 12 11:27:26.153: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00367261s
Nov 12 11:27:28.155: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.005411707s
Nov 12 11:27:30.165: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015343398s
Nov 12 11:27:32.168: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017988823s
Nov 12 11:27:34.170: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02049418s
STEP: Saw pod success
Nov 12 11:27:34.170: INFO: Pod "downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e" satisfied condition "success or failure"
Nov 12 11:27:34.172: INFO: Trying to get logs from node node4 pod downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e container client-container: 
STEP: delete the pod
Nov 12 11:27:34.189: INFO: Waiting for pod downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e to disappear
Nov 12 11:27:34.191: INFO: Pod downwardapi-volume-6038e170-ab92-4c8a-b600-6de26fc5ed1e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:27:34.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5761" for this suite.

• [SLOW TEST:10.064 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4181,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:27:34.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:28:04.218: INFO: Container started at 2020-11-12 11:27:43 +0000 UTC, pod became ready at 2020-11-12 11:28:03 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:28:04.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9241" for this suite.

• [SLOW TEST:30.028 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4183,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:28:04.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c
Nov 12 11:28:04.242: INFO: Pod name my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c: Found 0 pods out of 1
Nov 12 11:28:09.244: INFO: Pod name my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c: Found 1 pods out of 1
Nov 12 11:28:09.244: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c" are running
Nov 12 11:28:15.248: INFO: Pod "my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c-rhxxh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 11:28:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 11:28:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 11:28:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-12 11:28:04 +0000 UTC Reason: Message:}])
Nov 12 11:28:15.248: INFO: Trying to dial the pod
Nov 12 11:28:20.256: INFO: Controller my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c: Got expected result from replica 1 [my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c-rhxxh]: "my-hostname-basic-9bdf5562-cba4-4823-ba18-23dfe8fe246c-rhxxh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:28:20.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8698" for this suite.

• [SLOW TEST:16.039 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":257,"skipped":4191,"failed":0}
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:28:20.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:28:36.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-923" for this suite.

• [SLOW TEST:16.070 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":258,"skipped":4191,"failed":0}
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:28:36.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Nov 12 11:28:48.368: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.368: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:48.486: INFO: Exec stderr: ""
Nov 12 11:28:48.486: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.486: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:48.589: INFO: Exec stderr: ""
Nov 12 11:28:48.589: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.589: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:48.688: INFO: Exec stderr: ""
Nov 12 11:28:48.688: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.688: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:48.778: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Nov 12 11:28:48.778: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.778: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:48.868: INFO: Exec stderr: ""
Nov 12 11:28:48.868: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.868: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:48.954: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Nov 12 11:28:48.954: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:48.954: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:49.066: INFO: Exec stderr: ""
Nov 12 11:28:49.066: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:49.066: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:49.161: INFO: Exec stderr: ""
Nov 12 11:28:49.161: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:49.161: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:49.254: INFO: Exec stderr: ""
Nov 12 11:28:49.254: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5355 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 12 11:28:49.254: INFO: >>> kubeConfig: /root/.kube/config
Nov 12 11:28:49.344: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:28:49.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5355" for this suite.

• [SLOW TEST:13.016 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4196,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:28:49.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 11:28:49.937: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 11:28:51.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:28:53.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:28:55.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:28:57.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:28:59.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777329, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 11:29:02.952: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:29:02.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9486" for this suite.
STEP: Destroying namespace "webhook-9486-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.637 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":260,"skipped":4225,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:29:02.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Nov 12 11:29:03.006: INFO: Waiting up to 5m0s for pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a" in namespace "downward-api-1950" to be "success or failure"
Nov 12 11:29:03.008: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.600675ms
Nov 12 11:29:05.010: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004016355s
Nov 12 11:29:07.012: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00631034s
Nov 12 11:29:09.015: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008905038s
Nov 12 11:29:11.018: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011629356s
Nov 12 11:29:13.022: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016069253s
STEP: Saw pod success
Nov 12 11:29:13.022: INFO: Pod "downward-api-83824d41-055e-4fae-b566-dc4785e7801a" satisfied condition "success or failure"
Nov 12 11:29:13.024: INFO: Trying to get logs from node node1 pod downward-api-83824d41-055e-4fae-b566-dc4785e7801a container dapi-container: 
STEP: delete the pod
Nov 12 11:29:13.040: INFO: Waiting for pod downward-api-83824d41-055e-4fae-b566-dc4785e7801a to disappear
Nov 12 11:29:13.044: INFO: Pod downward-api-83824d41-055e-4fae-b566-dc4785e7801a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:29:13.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1950" for this suite.

• [SLOW TEST:10.062 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:29:13.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-c8q4k in namespace proxy-6288
I1112 11:29:13.072723      10 runners.go:189] Created replication controller with name: proxy-service-c8q4k, namespace: proxy-6288, replica count: 1
I1112 11:29:14.123155      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:15.123409      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:16.123694      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:17.123953      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:18.124232      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:19.124513      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:20.124699      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:21.124897      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:22.125083      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1112 11:29:23.127517      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1112 11:29:24.130495      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1112 11:29:25.130758      10 runners.go:189] proxy-service-c8q4k Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 12 11:29:25.132: INFO: setup took 12.066149103s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
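
Every attempt below goes through the API server's proxy subresource, which forwards the request to the pod or service; the target segment encodes an optional scheme and port, as in "http:proxy-service-c8q4k-znzjl:1080". A small helper that produces the same paths seen in these lines:

package sketch

import "fmt"

// proxyURL rebuilds the request paths used below. kind is "pods" or
// "services"; scheme ("http"/"https") and port (number or named port)
// are optional.
func proxyURL(namespace, kind, name, scheme, port string) string {
  target := name
  if scheme != "" {
    target = scheme + ":" + target
  }
  if port != "" {
    target += ":" + port
  }
  return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", namespace, kind, target)
}

For example, proxyURL("proxy-6288", "pods", "proxy-service-c8q4k-znzjl", "http", "1080") yields the first path logged below.
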
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 4.139192ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 4.089414ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 4.15749ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 4.078126ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 4.083033ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 4.284899ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 4.097527ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 4.175183ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 4.334968ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 4.207589ms)
Nov 12 11:29:25.137: INFO: (0) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 4.141892ms)
Nov 12 11:29:25.142: INFO: (0) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 9.213831ms)
Nov 12 11:29:25.142: INFO: (0) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 9.36594ms)
Nov 12 11:29:25.142: INFO: (0) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 9.208907ms)
Nov 12 11:29:25.142: INFO: (0) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test<... (200; 1.926864ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.517007ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.656495ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.672823ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.673225ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.672299ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: ... (200; 2.774492ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.870796ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.90166ms)
Nov 12 11:29:25.145: INFO: (1) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.887939ms)
Nov 12 11:29:25.157: INFO: (1) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 14.634246ms)
Nov 12 11:29:25.157: INFO: (1) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 14.630105ms)
Nov 12 11:29:25.157: INFO: (1) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 14.709093ms)
Nov 12 11:29:25.159: INFO: (2) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 1.660417ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.742303ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.778903ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.790399ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.763154ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 3.141991ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 3.401131ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 3.470319ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 3.447644ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 3.421278ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 3.559913ms)
Nov 12 11:29:25.160: INFO: (2) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test (200; 1.621183ms)
Nov 12 11:29:25.162: INFO: (3) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: ... (200; 1.837977ms)
Nov 12 11:29:25.163: INFO: (3) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 1.829426ms)
Nov 12 11:29:25.163: INFO: (3) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.15723ms)
Nov 12 11:29:25.163: INFO: (3) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.443785ms)
Nov 12 11:29:25.163: INFO: (3) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.474057ms)
Nov 12 11:29:25.163: INFO: (3) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.730249ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.731913ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.744649ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.820993ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.804521ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.868593ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.768417ms)
Nov 12 11:29:25.164: INFO: (3) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.905805ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 1.883351ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test<... (200; 2.210182ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.18512ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.506338ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.581311ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.698767ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.612419ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.610489ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.643621ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.751251ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.785539ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.750409ms)
Nov 12 11:29:25.166: INFO: (4) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.842983ms)
Nov 12 11:29:25.167: INFO: (4) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.786512ms)
Nov 12 11:29:25.167: INFO: (4) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.777697ms)
Nov 12 11:29:25.169: INFO: (5) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.408824ms)
Nov 12 11:29:25.169: INFO: (5) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.363592ms)
Nov 12 11:29:25.169: INFO: (5) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.372334ms)
Nov 12 11:29:25.169: INFO: (5) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.8819ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.887721ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 3.007466ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.960547ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.934237ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test (200; 2.991716ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 3.207094ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 3.219446ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 3.236829ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 3.244417ms)
Nov 12 11:29:25.170: INFO: (5) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 3.311697ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 1.623747ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 1.810973ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 1.820402ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.329015ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.415575ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.443632ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.381164ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.418764ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.451984ms)
Nov 12 11:29:25.172: INFO: (6) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test<... (200; 2.414665ms)
Nov 12 11:29:25.175: INFO: (7) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.413989ms)
Nov 12 11:29:25.175: INFO: (7) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.424757ms)
Nov 12 11:29:25.175: INFO: (7) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.409911ms)
Nov 12 11:29:25.175: INFO: (7) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.429315ms)
Nov 12 11:29:25.175: INFO: (7) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.418044ms)
Nov 12 11:29:25.176: INFO: (7) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.765172ms)
Nov 12 11:29:25.176: INFO: (7) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.976704ms)
Nov 12 11:29:25.176: INFO: (7) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.984579ms)
Nov 12 11:29:25.176: INFO: (7) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 3.027532ms)
Nov 12 11:29:25.176: INFO: (7) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 3.059498ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.284045ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.344934ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.235153ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.386328ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.399324ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.416195ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.483334ms)
Nov 12 11:29:25.178: INFO: (8) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.495913ms)
Nov 12 11:29:25.179: INFO: (8) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.54142ms)
Nov 12 11:29:25.179: INFO: (8) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.56581ms)
Nov 12 11:29:25.179: INFO: (8) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.602944ms)
Nov 12 11:29:25.179: INFO: (8) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test<... (200; 2.745541ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.729073ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.830997ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.879771ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.82927ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.827346ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.90051ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.886351ms)
Nov 12 11:29:25.182: INFO: (9) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test<... (200; 1.913047ms)
Nov 12 11:29:25.184: INFO: (10) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: ... (200; 2.154102ms)
Nov 12 11:29:25.184: INFO: (10) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.346173ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 3.050093ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 3.145223ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 3.097532ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 3.084536ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 3.097064ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 3.057427ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 3.078976ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 3.148046ms)
Nov 12 11:29:25.185: INFO: (10) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 3.100467ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.253932ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.389195ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test (200; 2.454838ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.580724ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.551078ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.579322ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.586189ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.623324ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.724706ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.679544ms)
Nov 12 11:29:25.188: INFO: (11) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.72423ms)
Nov 12 11:29:25.189: INFO: (11) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.845853ms)
Nov 12 11:29:25.189: INFO: (11) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.848519ms)
Nov 12 11:29:25.189: INFO: (11) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.884819ms)
Nov 12 11:29:25.189: INFO: (11) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.868932ms)
Nov 12 11:29:25.191: INFO: (12) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.489729ms)
Nov 12 11:29:25.191: INFO: (12) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.488542ms)
Nov 12 11:29:25.191: INFO: (12) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.527116ms)
Nov 12 11:29:25.191: INFO: (12) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.577146ms)
Nov 12 11:29:25.191: INFO: (12) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.532806ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: ... (200; 2.852539ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.87648ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.93009ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.905539ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.934329ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 3.020607ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 3.051568ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 3.095188ms)
Nov 12 11:29:25.192: INFO: (12) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 3.026053ms)
Nov 12 11:29:25.194: INFO: (13) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 1.883767ms)
Nov 12 11:29:25.194: INFO: (13) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.531423ms)
Nov 12 11:29:25.194: INFO: (13) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.495595ms)
Nov 12 11:29:25.194: INFO: (13) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.522896ms)
Nov 12 11:29:25.194: INFO: (13) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.579887ms)
Nov 12 11:29:25.195: INFO: (13) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.535628ms)
Nov 12 11:29:25.195: INFO: (13) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.656427ms)
Nov 12 11:29:25.195: INFO: (13) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.601746ms)
Nov 12 11:29:25.195: INFO: (13) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.604426ms)
Nov 12 11:29:25.195: INFO: (13) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.798512ms)
Nov 12 11:29:25.195: INFO: (13) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test (200; 2.020478ms)
Nov 12 11:29:25.197: INFO: (14) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.099012ms)
Nov 12 11:29:25.197: INFO: (14) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.078122ms)
Nov 12 11:29:25.197: INFO: (14) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.049349ms)
Nov 12 11:29:25.197: INFO: (14) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.069905ms)
Nov 12 11:29:25.197: INFO: (14) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.096623ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.990948ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.929024ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.961807ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 3.035888ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 3.041473ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.97851ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.969588ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.993501ms)
Nov 12 11:29:25.198: INFO: (14) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 3.038649ms)
Nov 12 11:29:25.199: INFO: (15) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 1.453742ms)
Nov 12 11:29:25.200: INFO: (15) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.167694ms)
Nov 12 11:29:25.200: INFO: (15) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: ... (200; 2.312808ms)
Nov 12 11:29:25.200: INFO: (15) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.473018ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.455211ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.58291ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.698041ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.655306ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.612314ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.650866ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.650215ms)
Nov 12 11:29:25.201: INFO: (15) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.62506ms)
Nov 12 11:29:25.203: INFO: (16) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 1.742226ms)
Nov 12 11:29:25.203: INFO: (16) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 1.785872ms)
Nov 12 11:29:25.203: INFO: (16) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 1.829163ms)
Nov 12 11:29:25.203: INFO: (16) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.668994ms)
Nov 12 11:29:25.203: INFO: (16) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.684869ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.777808ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.81829ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.907388ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.927015ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.946042ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.920411ms)
Nov 12 11:29:25.204: INFO: (16) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test (200; 3.151472ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.002479ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.003238ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.081899ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.16611ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 2.171988ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test<... (200; 2.223973ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.426541ms)
Nov 12 11:29:25.206: INFO: (17) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.422894ms)
Nov 12 11:29:25.207: INFO: (17) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.562045ms)
Nov 12 11:29:25.207: INFO: (17) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.702016ms)
Nov 12 11:29:25.207: INFO: (17) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.681626ms)
Nov 12 11:29:25.207: INFO: (17) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.709641ms)
Nov 12 11:29:25.207: INFO: (17) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.709587ms)
Nov 12 11:29:25.207: INFO: (17) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.798087ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.001579ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl/proxy/: test (200; 1.971195ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.033355ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.265439ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.432508ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 2.325771ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.401259ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.352965ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.460096ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.472304ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.434657ms)
Nov 12 11:29:25.209: INFO: (18) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:443/proxy/: test (200; 1.636534ms)
Nov 12 11:29:25.211: INFO: (19) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 1.682947ms)
Nov 12 11:29:25.211: INFO: (19) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 1.728647ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:460/proxy/: tls baz (200; 1.917054ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname1/proxy/: foo (200; 2.005682ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/pods/proxy-service-c8q4k-znzjl:1080/proxy/: test<... (200; 2.061574ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/pods/https:proxy-service-c8q4k-znzjl:462/proxy/: tls qux (200; 2.000961ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:1080/proxy/: ... (200; 2.052388ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:162/proxy/: bar (200; 2.070865ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/pods/http:proxy-service-c8q4k-znzjl:160/proxy/: foo (200; 2.047157ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname1/proxy/: foo (200; 2.667312ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/services/http:proxy-service-c8q4k:portname2/proxy/: bar (200; 2.600184ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/services/proxy-service-c8q4k:portname2/proxy/: bar (200; 2.629868ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname1/proxy/: tls baz (200; 2.606841ms)
Nov 12 11:29:25.212: INFO: (19) /api/v1/namespaces/proxy-6288/services/https:proxy-service-c8q4k:tlsportname2/proxy/: tls qux (200; 2.649585ms)
STEP: deleting ReplicationController proxy-service-c8q4k in namespace proxy-6288, will wait for the garbage collector to delete the pods
Nov 12 11:29:25.267: INFO: Deleting ReplicationController proxy-service-c8q4k took: 2.83273ms
Nov 12 11:29:25.367: INFO: Terminating ReplicationController proxy-service-c8q4k pods took: 100.244192ms
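
The two lines above show the teardown pattern this test uses: the ReplicationController object is deleted first, and its pods are left to the garbage collector, which is why the suite then waits (until 11:29:39) before destroying the namespace below. A minimal client-go sketch (v0.18+ method signatures) of that kind of deletion; the kubeconfig path matches this run, but the Background propagation policy is an assumption, since the log does not show which policy the framework passed:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Assumed policy: with Background propagation the RC object is removed
    // immediately and the garbage collector deletes the dependent pods
    // afterwards, matching the "will wait for the garbage collector to
    // delete the pods" message above.
    policy := metav1.DeletePropagationBackground
    if err := cs.CoreV1().ReplicationControllers("proxy-6288").Delete(
        context.TODO(), "proxy-service-c8q4k",
        metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
        panic(err)
    }
}
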
[AfterEach] version v1
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:29:39.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6288" for this suite.

• [SLOW TEST:26.122 seconds]
[sig-network] Proxy
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":262,"skipped":4308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:29:39.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Nov 12 11:29:39.703: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Nov 12 11:29:41.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:29:43.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:29:45.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 12 11:29:47.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63740777379, loc:(*time.Location)(0x7939680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 12 11:29:50.717: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:29:50.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-876-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
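
The three steps above create the custom resource while v1 is the storage version, flip the CRD's storage version to v2, and patch the resource again, verifying that the mutating webhook rewrites it under both storage versions. Below is a minimal sketch, assuming k8s.io/api/admission/v1, of the general shape of handler a pod like sample-webhook-deployment serves; the listen port, URL path, and the JSONPatch it returns are illustrative assumptions rather than values from the suite, and the real deployment terminates TLS behind the e2e-test-webhook service:

package main

import (
    "encoding/json"
    "net/http"

    admissionv1 "k8s.io/api/admission/v1"
)

// mutate answers an AdmissionReview by allowing the request and attaching a
// JSONPatch, which the API server applies before persisting the object.
func mutate(w http.ResponseWriter, r *http.Request) {
    var review admissionv1.AdmissionReview
    if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    // Illustrative mutation: add a label to whatever object was admitted.
    patch := []byte(`[{"op":"add","path":"/metadata/labels","value":{"mutated":"true"}}]`)
    pt := admissionv1.PatchTypeJSONPatch
    review.Response = &admissionv1.AdmissionResponse{
        UID:       review.Request.UID,
        Allowed:   true,
        Patch:     patch,
        PatchType: &pt,
    }
    json.NewEncoder(w).Encode(&review)
}

func main() {
    http.HandleFunc("/mutate-crd", mutate) // assumed path
    http.ListenAndServe(":8080", nil)      // assumed port; plain HTTP keeps the sketch self-contained
}
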
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:29:51.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4640" for this suite.
STEP: Destroying namespace "webhook-4640-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.717 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":263,"skipped":4370,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:29:51.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Nov 12 11:29:51.918: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Nov 12 11:29:51.922: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:51.924: INFO: Number of nodes with available pods: 0
Nov 12 11:29:51.924: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:52.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:52.930: INFO: Number of nodes with available pods: 0
Nov 12 11:29:52.930: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:53.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:53.930: INFO: Number of nodes with available pods: 0
Nov 12 11:29:53.930: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:54.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:54.930: INFO: Number of nodes with available pods: 0
Nov 12 11:29:54.930: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:55.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:55.931: INFO: Number of nodes with available pods: 0
Nov 12 11:29:55.931: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:56.927: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:56.929: INFO: Number of nodes with available pods: 0
Nov 12 11:29:56.929: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:57.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:57.930: INFO: Number of nodes with available pods: 0
Nov 12 11:29:57.931: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:58.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:58.930: INFO: Number of nodes with available pods: 0
Nov 12 11:29:58.930: INFO: Node node1 is running more than one daemon pod
Nov 12 11:29:59.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:29:59.931: INFO: Number of nodes with available pods: 0
Nov 12 11:29:59.931: INFO: Node node1 is running more than one daemon pod
Nov 12 11:30:00.927: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:00.930: INFO: Number of nodes with available pods: 0
Nov 12 11:30:00.930: INFO: Node node1 is running more than one daemon pod
Nov 12 11:30:01.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:01.930: INFO: Number of nodes with available pods: 3
Nov 12 11:30:01.930: INFO: Node node1 is running more than one daemon pod
Nov 12 11:30:02.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:02.930: INFO: Number of nodes with available pods: 4
Nov 12 11:30:02.930: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
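
The two steps above switch the DaemonSet's pod template from docker.io/library/httpd:2.4.38-alpine to gcr.io/kubernetes-e2e-test-images/agnhost:2.8; because the update strategy is RollingUpdate, the controller replaces pods node by node, and the "Wrong image for pod" / "is not available" lines that follow are the suite polling that rollout until every pod reports the new image. A hedged client-go sketch (v0.18+ method signatures) of such an image update; the namespace is a placeholder, since the generated daemonsets-XXXX name never appears in this excerpt, and the container name "app" is an assumption:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Strategic-merge patch that swaps the template image; under a
    // RollingUpdate strategy this triggers the node-by-node replacement
    // polled below. "daemonsets-1234" and the container name "app" are
    // placeholders, not values from the log.
    patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8"}]}}}}`)
    if _, err := cs.AppsV1().DaemonSets("daemonsets-1234").Patch(
        context.TODO(), "daemon-set", types.StrategicMergePatchType, patch,
        metav1.PatchOptions{}); err != nil {
        panic(err)
    }
}
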
Nov 12 11:30:02.946: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:02.946: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:02.946: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:02.946: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:02.948: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:03.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:03.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:03.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:03.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:03.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:04.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:04.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:04.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:04.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:04.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:05.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:05.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:05.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:05.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:05.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:06.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:06.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:06.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:06.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:06.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:06.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:07.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:07.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:07.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:07.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:07.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:07.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:08.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:08.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:08.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:08.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:08.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:08.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:09.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:09.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:09.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:09.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:09.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:09.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:10.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:10.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:10.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:10.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:10.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:10.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:11.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:11.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:11.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:11.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:11.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:11.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:12.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:12.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:12.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:12.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:12.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:12.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:13.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:13.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:13.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:13.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:13.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:13.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:14.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:14.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:14.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:14.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:14.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:14.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:15.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:15.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:15.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:15.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:15.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:15.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:16.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:16.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:16.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:16.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:16.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:16.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:17.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:17.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:17.951: INFO: Wrong image for pod: daemon-set-x7rrc. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:17.951: INFO: Pod daemon-set-x7rrc is not available
Nov 12 11:30:17.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:17.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:18.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:18.951: INFO: Pod daemon-set-lns8b is not available
Nov 12 11:30:18.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:18.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:18.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:19.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:19.951: INFO: Pod daemon-set-lns8b is not available
Nov 12 11:30:19.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:19.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:19.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:20.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:20.951: INFO: Pod daemon-set-lns8b is not available
Nov 12 11:30:20.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:20.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:20.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... the five-line poll above repeated unchanged (timestamps aside) once per second, 11:30:21 through 11:30:27 ...]
Nov 12 11:30:28.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:28.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:28.951: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:28.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... same poll repeated at 11:30:29 ...]
Nov 12 11:30:30.952: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:30.952: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:30.952: INFO: Wrong image for pod: daemon-set-zfbmr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:30.952: INFO: Pod daemon-set-zfbmr is not available
Nov 12 11:30:30.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... poll repeated unchanged once per second, 11:30:31 through 11:30:37 ...]
Nov 12 11:30:38.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:38.951: INFO: Pod daemon-set-f84qp is not available
Nov 12 11:30:38.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:38.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... poll repeated unchanged once per second, 11:30:39 through 11:30:47 ...]
Nov 12 11:30:48.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:48.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:48.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... same poll repeated at 11:30:49 ...]
Nov 12 11:30:50.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:50.951: INFO: Wrong image for pod: daemon-set-plgfl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:50.951: INFO: Pod daemon-set-plgfl is not available
Nov 12 11:30:50.954: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:30:51.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:30:51.951: INFO: Pod daemon-set-rc5lc is not available
Nov 12 11:30:51.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... poll repeated unchanged once per second, 11:30:52 through 11:31:00 ...]
Nov 12 11:31:01.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:31:01.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... same poll repeated at 11:31:02 ...]
Nov 12 11:31:03.951: INFO: Wrong image for pod: daemon-set-88xc7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Nov 12 11:31:03.951: INFO: Pod daemon-set-88xc7 is not available
Nov 12 11:31:03.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... poll repeated unchanged once per second, 11:31:04 through 11:31:08 ...]
Nov 12 11:31:09.952: INFO: Pod daemon-set-vhljt is not available
Nov 12 11:31:09.955: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Nov 12 11:31:09.958: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:31:09.960: INFO: Number of nodes with available pods: 3
Nov 12 11:31:09.960: INFO: Node node4 is running more than one daemon pod
[... three-line poll repeated once per second, 11:31:10 through 11:31:18, still reporting 3 nodes with available pods ...]
Nov 12 11:31:19.964: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 12 11:31:19.967: INFO: Number of nodes with available pods: 4
Nov 12 11:31:19.967: INFO: Number of running nodes: 4, number of available pods: 4
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6564, will wait for the garbage collector to delete the pods
Nov 12 11:31:20.034: INFO: Deleting DaemonSet.extensions daemon-set took: 3.548767ms
Nov 12 11:31:20.334: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.230516ms
Nov 12 11:31:29.136: INFO: Number of nodes with available pods: 0
Nov 12 11:31:29.136: INFO: Number of running nodes: 0, number of available pods: 0
Nov 12 11:31:29.138: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6564/daemonsets","resourceVersion":"32469"},"items":null}

Nov 12 11:31:29.139: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6564/pods","resourceVersion":"32469"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:31:29.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6564" for this suite.

• [SLOW TEST:97.267 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":264,"skipped":4372,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:31:29.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-743637d5-7e72-4750-a515-97f8fc3d88b3
STEP: Creating a pod to test consume configMaps
Nov 12 11:31:29.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703" in namespace "configmap-947" to be "success or failure"
Nov 12 11:31:29.178: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703": Phase="Pending", Reason="", readiness=false. Elapsed: 1.641214ms
Nov 12 11:31:31.180: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003848389s
Nov 12 11:31:33.182: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006238761s
Nov 12 11:31:35.185: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008815539s
Nov 12 11:31:37.187: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011128197s
Nov 12 11:31:39.190: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.013899506s
STEP: Saw pod success
Nov 12 11:31:39.190: INFO: Pod "pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703" satisfied condition "success or failure"
Nov 12 11:31:39.192: INFO: Trying to get logs from node node1 pod pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703 container configmap-volume-test: 
STEP: delete the pod
Nov 12 11:31:39.210: INFO: Waiting for pod pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703 to disappear
Nov 12 11:31:39.211: INFO: Pod pod-configmaps-3785304d-f07a-47d0-9656-5cb358e92703 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:31:39.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-947" for this suite.

• [SLOW TEST:10.058 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4384,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:31:39.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Nov 12 11:31:39.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01" in namespace "downward-api-4940" to be "success or failure"
Nov 12 11:31:39.236: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01": Phase="Pending", Reason="", readiness=false. Elapsed: 1.76918ms
Nov 12 11:31:41.239: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004122386s
Nov 12 11:31:43.242: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007282417s
Nov 12 11:31:45.244: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009748092s
Nov 12 11:31:47.247: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012491955s
Nov 12 11:31:49.251: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016945911s
STEP: Saw pod success
Nov 12 11:31:49.252: INFO: Pod "downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01" satisfied condition "success or failure"
Nov 12 11:31:49.253: INFO: Trying to get logs from node node2 pod downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01 container client-container: 
STEP: delete the pod
Nov 12 11:31:49.273: INFO: Waiting for pod downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01 to disappear
Nov 12 11:31:49.275: INFO: Pod downwardapi-volume-0cef1ab1-128e-43ac-8dca-2cc9ddd6ad01 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:31:49.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4940" for this suite.

• [SLOW TEST:10.064 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4400,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:31:49.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Nov 12 11:31:49.297: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 12 11:31:49.306: INFO: Waiting for terminating namespaces to be deleted...
Nov 12 11:31:49.308: INFO: 
Logging pods the kubelet thinks are on node node1 before test
Nov 12 11:31:49.314: INFO: nodelocaldns-kpvsh from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.314: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 11:31:49.314: INFO: tiller-deploy-58f6ff6c77-zrmnw from kube-system started at 2020-11-12 09:47:10 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.314: INFO: 	Container tiller ready: true, restart count 1
Nov 12 11:31:49.314: INFO: registry-proxy-txrdh from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.314: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:31:49.314: INFO: kube-proxy-m6bqr from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.314: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 11:31:49.314: INFO: nginx-proxy-node1 from kube-system started at 2020-11-12 09:44:33 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.314: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 11:31:49.314: INFO: kube-flannel-z5kqm from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:31:49.314: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:31:49.314: INFO: 	Container kube-flannel ready: true, restart count 3
Nov 12 11:31:49.314: INFO: kube-multus-ds-amd64-k4qcb from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.314: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:31:49.314: INFO: 
Logging pods the kubelet thinks are on node node2 before test
Nov 12 11:31:49.319: INFO: kube-multus-ds-amd64-8cjwp from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.319: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:31:49.319: INFO: nodelocaldns-ss57m from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.319: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 11:31:49.319: INFO: registry-proxy-lsxh9 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.319: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:31:49.319: INFO: nginx-proxy-node2 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.319: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 11:31:49.319: INFO: kube-proxy-bbzk5 from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.319: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 11:31:49.319: INFO: kube-flannel-gsk24 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:31:49.319: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:31:49.319: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 11:31:49.319: INFO: 
Logging pods the kubelet thinks are on node node3 before test
Nov 12 11:31:49.332: INFO: nodelocaldns-jw5xn from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.332: INFO: 	Container node-cache ready: true, restart count 2
Nov 12 11:31:49.332: INFO: kube-proxy-4b76p from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.332: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 12 11:31:49.333: INFO: kube-multus-ds-amd64-vwl4k from kube-system started at 2020-11-12 09:45:48 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.333: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:31:49.333: INFO: registry-proxy-njmcx from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.333: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:31:49.333: INFO: nginx-proxy-node3 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.333: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 12 11:31:49.333: INFO: registry-9pgcj from kube-system started at 2020-11-12 09:47:38 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.333: INFO: 	Container registry ready: true, restart count 1
Nov 12 11:31:49.333: INFO: kube-flannel-r9726 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:31:49.333: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:31:49.333: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 12 11:31:49.333: INFO: 
Logging pods the kubelet thinks are on node node4 before test
Nov 12 11:31:49.344: INFO: nodelocaldns-4cm4z from kube-system started at 2020-11-12 09:46:32 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.345: INFO: 	Container node-cache ready: true, restart count 1
Nov 12 11:31:49.345: INFO: kube-proxy-qsp5l from kube-system started at 2020-11-12 09:44:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.345: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 12 11:31:49.345: INFO: registry-proxy-zvv86 from kube-system started at 2020-11-12 09:47:40 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.345: INFO: 	Container registry-proxy ready: true, restart count 1
Nov 12 11:31:49.345: INFO: nginx-proxy-node4 from kube-system started at 2020-11-12 09:44:34 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.345: INFO: 	Container nginx-proxy ready: true, restart count 1
Nov 12 11:31:49.345: INFO: kube-flannel-jbkp2 from kube-system started at 2020-11-12 09:45:39 +0000 UTC (2 container statuses recorded)
Nov 12 11:31:49.345: INFO: 	Container install-cni ready: true, restart count 1
Nov 12 11:31:49.345: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 12 11:31:49.345: INFO: kube-multus-ds-amd64-44jqf from kube-system started at 2020-11-12 09:45:49 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.345: INFO: 	Container kube-multus ready: true, restart count 1
Nov 12 11:31:49.345: INFO: coredns-58687784f9-c4bt6 from kube-system started at 2020-11-12 09:46:39 +0000 UTC (1 container status recorded)
Nov 12 11:31:49.345: INFO: 	Container coredns ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node node1
STEP: verifying the node has the label node node2
STEP: verifying the node has the label node node3
STEP: verifying the node has the label node node4
Nov 12 11:31:49.377: INFO: Pod coredns-58687784f9-c4bt6 requesting resource cpu=100m on Node node4
Nov 12 11:31:49.377: INFO: Pod kube-flannel-gsk24 requesting resource cpu=150m on Node node2
Nov 12 11:31:49.378: INFO: Pod kube-flannel-jbkp2 requesting resource cpu=150m on Node node4
Nov 12 11:31:49.378: INFO: Pod kube-flannel-r9726 requesting resource cpu=150m on Node node3
Nov 12 11:31:49.378: INFO: Pod kube-flannel-z5kqm requesting resource cpu=150m on Node node1
Nov 12 11:31:49.378: INFO: Pod kube-multus-ds-amd64-44jqf requesting resource cpu=100m on Node node4
Nov 12 11:31:49.378: INFO: Pod kube-multus-ds-amd64-8cjwp requesting resource cpu=100m on Node node2
Nov 12 11:31:49.378: INFO: Pod kube-multus-ds-amd64-k4qcb requesting resource cpu=100m on Node node1
Nov 12 11:31:49.378: INFO: Pod kube-multus-ds-amd64-vwl4k requesting resource cpu=100m on Node node3
Nov 12 11:31:49.378: INFO: Pod kube-proxy-4b76p requesting resource cpu=0m on Node node3
Nov 12 11:31:49.378: INFO: Pod kube-proxy-bbzk5 requesting resource cpu=0m on Node node2
Nov 12 11:31:49.378: INFO: Pod kube-proxy-m6bqr requesting resource cpu=0m on Node node1
Nov 12 11:31:49.378: INFO: Pod kube-proxy-qsp5l requesting resource cpu=0m on Node node4
Nov 12 11:31:49.378: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1
Nov 12 11:31:49.378: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2
Nov 12 11:31:49.378: INFO: Pod nginx-proxy-node3 requesting resource cpu=25m on Node node3
Nov 12 11:31:49.378: INFO: Pod nginx-proxy-node4 requesting resource cpu=25m on Node node4
Nov 12 11:31:49.378: INFO: Pod nodelocaldns-4cm4z requesting resource cpu=100m on Node node4
Nov 12 11:31:49.378: INFO: Pod nodelocaldns-jw5xn requesting resource cpu=100m on Node node3
Nov 12 11:31:49.378: INFO: Pod nodelocaldns-kpvsh requesting resource cpu=100m on Node node1
Nov 12 11:31:49.378: INFO: Pod nodelocaldns-ss57m requesting resource cpu=100m on Node node2
Nov 12 11:31:49.378: INFO: Pod registry-9pgcj requesting resource cpu=0m on Node node3
Nov 12 11:31:49.378: INFO: Pod registry-proxy-lsxh9 requesting resource cpu=0m on Node node2
Nov 12 11:31:49.378: INFO: Pod registry-proxy-njmcx requesting resource cpu=0m on Node node3
Nov 12 11:31:49.378: INFO: Pod registry-proxy-txrdh requesting resource cpu=0m on Node node1
Nov 12 11:31:49.378: INFO: Pod registry-proxy-zvv86 requesting resource cpu=0m on Node node4
Nov 12 11:31:49.378: INFO: Pod tiller-deploy-58f6ff6c77-zrmnw requesting resource cpu=0m on Node node1
STEP: Starting Pods to consume most of the cluster CPU.
Nov 12 11:31:49.378: INFO: Creating a pod which consumes cpu=33267m on Node node1
Nov 12 11:31:49.381: INFO: Creating a pod which consumes cpu=33267m on Node node2
Nov 12 11:31:49.383: INFO: Creating a pod which consumes cpu=33267m on Node node3
Nov 12 11:31:49.385: INFO: Creating a pod which consumes cpu=33197m on Node node4
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f.1646bf59bf3731af], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3785/filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f to node3]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f.1646bf5bd2744cbe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f.1646bf5bd7228c90], Reason = [Created], Message = [Created container filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f.1646bf5bdc37d03e], Reason = [Started], Message = [Started container filler-pod-1c4f5f6e-d5f3-4448-ad37-eb455f04bc7f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-299ad79a-9d99-441f-a834-a95dc1170359.1646bf59bf058e61], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3785/filler-pod-299ad79a-9d99-441f-a834-a95dc1170359 to node1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-299ad79a-9d99-441f-a834-a95dc1170359.1646bf5bd32dc1f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-299ad79a-9d99-441f-a834-a95dc1170359.1646bf5bd829de4b], Reason = [Created], Message = [Created container filler-pod-299ad79a-9d99-441f-a834-a95dc1170359]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-299ad79a-9d99-441f-a834-a95dc1170359.1646bf5bdd11eb54], Reason = [Started], Message = [Started container filler-pod-299ad79a-9d99-441f-a834-a95dc1170359]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97.1646bf59bf1b616b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3785/filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97 to node2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97.1646bf5bdf33eb20], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97.1646bf5be462da11], Reason = [Created], Message = [Created container filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97.1646bf5be97825f8], Reason = [Started], Message = [Started container filler-pod-b78f6c2d-c114-4a08-9004-de85fff5dd97]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522.1646bf59bf598a90], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3785/filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522 to node4]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522.1646bf5bd27ba1f4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522.1646bf5bd8018dc3], Reason = [Created], Message = [Created container filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522.1646bf5bddd1e771], Reason = [Started], Message = [Started container filler-pod-da119171-7b55-43bf-b9e8-00bc21dfd522]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1646bf5c8ba6fb2a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 4 Insufficient cpu.]
STEP: removing the label node off the node node1
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node3
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node4
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:32:02.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3785" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:13.164 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":267,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:32:02.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-1e737c77-7593-4770-a4e9-b2061865c78d in namespace container-probe-7115
Nov 12 11:32:12.468: INFO: Started pod liveness-1e737c77-7593-4770-a4e9-b2061865c78d in namespace container-probe-7115
STEP: checking the pod's current state and verifying that restartCount is present
Nov 12 11:32:12.470: INFO: Initial restart count of pod liveness-1e737c77-7593-4770-a4e9-b2061865c78d is 0
Nov 12 11:32:28.494: INFO: Restart count of pod container-probe-7115/liveness-1e737c77-7593-4770-a4e9-b2061865c78d is now 1 (16.0237961s elapsed)
Nov 12 11:32:48.521: INFO: Restart count of pod container-probe-7115/liveness-1e737c77-7593-4770-a4e9-b2061865c78d is now 2 (36.050759912s elapsed)
Nov 12 11:33:08.559: INFO: Restart count of pod container-probe-7115/liveness-1e737c77-7593-4770-a4e9-b2061865c78d is now 3 (56.088584337s elapsed)
Nov 12 11:33:28.586: INFO: Restart count of pod container-probe-7115/liveness-1e737c77-7593-4770-a4e9-b2061865c78d is now 4 (1m16.115975038s elapsed)
Nov 12 11:34:30.671: INFO: Restart count of pod container-probe-7115/liveness-1e737c77-7593-4770-a4e9-b2061865c78d is now 5 (2m18.201206936s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:34:30.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7115" for this suite.

• [SLOW TEST:148.236 seconds]
[k8s.io] Probing container
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4429,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:34:30.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-fb99c27c-a7c3-4023-8364-ead30b219099
STEP: Creating a pod to test consume configMaps
Nov 12 11:34:30.704: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08" in namespace "projected-4606" to be "success or failure"
Nov 12 11:34:30.709: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552383ms
Nov 12 11:34:32.711: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00673186s
Nov 12 11:34:34.714: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009690721s
Nov 12 11:34:36.717: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012541245s
Nov 12 11:34:38.720: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0157164s
Nov 12 11:34:40.722: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018312682s
Nov 12 11:34:42.725: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.020680801s
STEP: Saw pod success
Nov 12 11:34:42.725: INFO: Pod "pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08" satisfied condition "success or failure"
Nov 12 11:34:42.727: INFO: Trying to get logs from node node4 pod pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08 container projected-configmap-volume-test: 
STEP: delete the pod
Nov 12 11:34:42.743: INFO: Waiting for pod pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08 to disappear
Nov 12 11:34:42.745: INFO: Pod pod-projected-configmaps-f0ffc99e-d9d5-4b87-871c-51a251f45f08 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:34:42.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4606" for this suite.

• [SLOW TEST:12.068 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4440,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:34:42.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Nov 12 11:34:42.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1425'
Nov 12 11:34:43.097: INFO: stderr: ""
Nov 12 11:34:43.097: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 12 11:34:43.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1425'
Nov 12 11:34:43.229: INFO: stderr: ""
Nov 12 11:34:43.229: INFO: stdout: "update-demo-nautilus-4f84c update-demo-nautilus-mqtt9 "
Nov 12 11:34:43.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4f84c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:43.364: INFO: stderr: ""
Nov 12 11:34:43.364: INFO: stdout: ""
Nov 12 11:34:43.364: INFO: update-demo-nautilus-4f84c is created but not running
Nov 12 11:34:48.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1425'
Nov 12 11:34:48.519: INFO: stderr: ""
Nov 12 11:34:48.519: INFO: stdout: "update-demo-nautilus-4f84c update-demo-nautilus-mqtt9 "
Nov 12 11:34:48.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4f84c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:48.644: INFO: stderr: ""
Nov 12 11:34:48.644: INFO: stdout: ""
Nov 12 11:34:48.644: INFO: update-demo-nautilus-4f84c is created but not running
Nov 12 11:34:53.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1425'
Nov 12 11:34:53.766: INFO: stderr: ""
Nov 12 11:34:53.766: INFO: stdout: "update-demo-nautilus-4f84c update-demo-nautilus-mqtt9 "
Nov 12 11:34:53.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4f84c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:53.920: INFO: stderr: ""
Nov 12 11:34:53.920: INFO: stdout: ""
Nov 12 11:34:53.920: INFO: update-demo-nautilus-4f84c is created but not running
Nov 12 11:34:58.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1425'
Nov 12 11:34:59.060: INFO: stderr: ""
Nov 12 11:34:59.060: INFO: stdout: "update-demo-nautilus-4f84c update-demo-nautilus-mqtt9 "
Nov 12 11:34:59.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4f84c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:59.200: INFO: stderr: ""
Nov 12 11:34:59.200: INFO: stdout: "true"
Nov 12 11:34:59.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4f84c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:59.322: INFO: stderr: ""
Nov 12 11:34:59.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:34:59.322: INFO: validating pod update-demo-nautilus-4f84c
Nov 12 11:34:59.333: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:34:59.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:34:59.333: INFO: update-demo-nautilus-4f84c is verified up and running
Nov 12 11:34:59.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqtt9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:59.468: INFO: stderr: ""
Nov 12 11:34:59.468: INFO: stdout: "true"
Nov 12 11:34:59.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqtt9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1425'
Nov 12 11:34:59.619: INFO: stderr: ""
Nov 12 11:34:59.620: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:34:59.621: INFO: validating pod update-demo-nautilus-mqtt9
Nov 12 11:34:59.625: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:34:59.625: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:34:59.625: INFO: update-demo-nautilus-mqtt9 is verified up and running
STEP: using delete to clean up resources
Nov 12 11:34:59.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1425'
Nov 12 11:34:59.744: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:34:59.744: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Nov 12 11:34:59.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1425'
Nov 12 11:34:59.887: INFO: stderr: "No resources found in kubectl-1425 namespace.\n"
Nov 12 11:34:59.887: INFO: stdout: ""
Nov 12 11:34:59.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1425 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 12 11:35:00.043: INFO: stderr: ""
Nov 12 11:35:00.043: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:35:00.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1425" for this suite.

• [SLOW TEST:17.305 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":270,"skipped":4460,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:35:00.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-1856c54b-093b-41ae-a3bc-f6ec984af711
STEP: Creating a pod to test consume secrets
Nov 12 11:35:00.087: INFO: Waiting up to 5m0s for pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465" in namespace "secrets-4369" to be "success or failure"
Nov 12 11:35:00.089: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Pending", Reason="", readiness=false. Elapsed: 1.495034ms
Nov 12 11:35:02.092: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004564979s
Nov 12 11:35:04.095: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007325013s
Nov 12 11:35:06.098: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010478332s
Nov 12 11:35:08.100: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013010992s
Nov 12 11:35:10.103: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Pending", Reason="", readiness=false. Elapsed: 10.01560585s
Nov 12 11:35:12.105: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.017925598s
STEP: Saw pod success
Nov 12 11:35:12.105: INFO: Pod "pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465" satisfied condition "success or failure"
Nov 12 11:35:12.107: INFO: Trying to get logs from node node3 pod pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465 container secret-volume-test: 
STEP: delete the pod
Nov 12 11:35:12.123: INFO: Waiting for pod pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465 to disappear
Nov 12 11:35:12.125: INFO: Pod pod-secrets-5c0ea29f-ea1a-4540-a8f3-eb5b60b91465 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:35:12.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4369" for this suite.

• [SLOW TEST:12.073 seconds]
[sig-storage] Secrets
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4471,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:35:12.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Nov 12 11:35:12.150: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33303 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 12 11:35:12.150: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33303 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Nov 12 11:35:22.155: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33335 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Nov 12 11:35:22.155: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33335 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Nov 12 11:35:32.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33357 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 12 11:35:32.161: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33357 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Nov 12 11:35:42.166: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33379 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 12 11:35:42.166: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-a 1bbb2987-66d9-4738-a23e-8b6de8e8582a 33379 0 2020-11-12 11:35:12 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Nov 12 11:35:52.171: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-b fd8463ab-4361-4fbd-9716-94abbc08f675 33401 0 2020-11-12 11:35:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 12 11:35:52.171: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-b fd8463ab-4361-4fbd-9716-94abbc08f675 33401 0 2020-11-12 11:35:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Nov 12 11:36:02.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-b fd8463ab-4361-4fbd-9716-94abbc08f675 33423 0 2020-11-12 11:35:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 12 11:36:02.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-6531 /api/v1/namespaces/watch-6531/configmaps/e2e-watch-test-configmap-b fd8463ab-4361-4fbd-9716-94abbc08f675 33423 0 2020-11-12 11:35:52 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:36:12.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6531" for this suite.

• [SLOW TEST:60.051 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":272,"skipped":4476,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:36:12.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Nov 12 11:36:22.727: INFO: Successfully updated pod "labelsupdatef2ac33ba-2c4b-4987-99ed-16ff7b1c3717"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:36:26.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1606" for this suite.

• [SLOW TEST:14.570 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4503,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:36:26.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Nov 12 11:36:26.776: INFO: Waiting up to 5m0s for pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93" in namespace "containers-5662" to be "success or failure"
Nov 12 11:36:26.778: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038144ms
Nov 12 11:36:28.780: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004760058s
Nov 12 11:36:30.783: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007270445s
Nov 12 11:36:32.786: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009875774s
Nov 12 11:36:34.789: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013093167s
Nov 12 11:36:36.792: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.016269422s
STEP: Saw pod success
Nov 12 11:36:36.792: INFO: Pod "client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93" satisfied condition "success or failure"
Nov 12 11:36:36.794: INFO: Trying to get logs from node node1 pod client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93 container test-container: 
STEP: delete the pod
Nov 12 11:36:36.804: INFO: Waiting for pod client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93 to disappear
Nov 12 11:36:36.806: INFO: Pod client-containers-cc0601d1-fe93-473c-b886-b5d4d0490c93 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:36:36.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5662" for this suite.

• [SLOW TEST:10.059 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4508,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:36:36.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Nov 12 11:36:36.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7526'
Nov 12 11:36:37.039: INFO: stderr: ""
Nov 12 11:36:37.039: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 12 11:36:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:37.175: INFO: stderr: ""
Nov 12 11:36:37.177: INFO: stdout: "update-demo-nautilus-86c2k update-demo-nautilus-nzp2k "
Nov 12 11:36:37.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:37.292: INFO: stderr: ""
Nov 12 11:36:37.292: INFO: stdout: ""
Nov 12 11:36:37.292: INFO: update-demo-nautilus-86c2k is created but not running
Nov 12 11:36:42.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:42.428: INFO: stderr: ""
Nov 12 11:36:42.428: INFO: stdout: "update-demo-nautilus-86c2k update-demo-nautilus-nzp2k "
Nov 12 11:36:42.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:42.569: INFO: stderr: ""
Nov 12 11:36:42.569: INFO: stdout: ""
Nov 12 11:36:42.569: INFO: update-demo-nautilus-86c2k is created but not running
Nov 12 11:36:47.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:47.683: INFO: stderr: ""
Nov 12 11:36:47.683: INFO: stdout: "update-demo-nautilus-86c2k update-demo-nautilus-nzp2k "
Nov 12 11:36:47.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:47.803: INFO: stderr: ""
Nov 12 11:36:47.803: INFO: stdout: "true"
Nov 12 11:36:47.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:47.933: INFO: stderr: ""
Nov 12 11:36:47.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:36:47.933: INFO: validating pod update-demo-nautilus-86c2k
Nov 12 11:36:47.937: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:36:47.938: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:36:47.939: INFO: update-demo-nautilus-86c2k is verified up and running
Nov 12 11:36:47.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzp2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:48.066: INFO: stderr: ""
Nov 12 11:36:48.066: INFO: stdout: ""
Nov 12 11:36:48.066: INFO: update-demo-nautilus-nzp2k is created but not running
Nov 12 11:36:53.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:53.189: INFO: stderr: ""
Nov 12 11:36:53.189: INFO: stdout: "update-demo-nautilus-86c2k update-demo-nautilus-nzp2k "
Nov 12 11:36:53.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:53.320: INFO: stderr: ""
Nov 12 11:36:53.320: INFO: stdout: "true"
Nov 12 11:36:53.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:53.466: INFO: stderr: ""
Nov 12 11:36:53.466: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:36:53.466: INFO: validating pod update-demo-nautilus-86c2k
Nov 12 11:36:53.469: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:36:53.469: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:36:53.469: INFO: update-demo-nautilus-86c2k is verified up and running
Nov 12 11:36:53.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzp2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:53.601: INFO: stderr: ""
Nov 12 11:36:53.601: INFO: stdout: "true"
Nov 12 11:36:53.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nzp2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:53.744: INFO: stderr: ""
Nov 12 11:36:53.744: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:36:53.744: INFO: validating pod update-demo-nautilus-nzp2k
Nov 12 11:36:53.748: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:36:53.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:36:53.748: INFO: update-demo-nautilus-nzp2k is verified up and running
STEP: scaling down the replication controller
Nov 12 11:36:53.755: INFO: scanned /root for discovery docs: 
Nov 12 11:36:53.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7526'
Nov 12 11:36:53.920: INFO: stderr: ""
Nov 12 11:36:53.920: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 12 11:36:53.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:54.050: INFO: stderr: ""
Nov 12 11:36:54.050: INFO: stdout: "update-demo-nautilus-86c2k update-demo-nautilus-nzp2k "
STEP: Replicas for name=update-demo: expected=1 actual=2
Nov 12 11:36:59.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:59.198: INFO: stderr: ""
Nov 12 11:36:59.199: INFO: stdout: "update-demo-nautilus-86c2k "
Nov 12 11:36:59.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:59.336: INFO: stderr: ""
Nov 12 11:36:59.336: INFO: stdout: "true"
Nov 12 11:36:59.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:59.477: INFO: stderr: ""
Nov 12 11:36:59.477: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:36:59.477: INFO: validating pod update-demo-nautilus-86c2k
Nov 12 11:36:59.479: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:36:59.479: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:36:59.479: INFO: update-demo-nautilus-86c2k is verified up and running
STEP: scaling up the replication controller
Nov 12 11:36:59.489: INFO: scanned /root for discovery docs: 
Nov 12 11:36:59.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7526'
Nov 12 11:36:59.637: INFO: stderr: ""
Nov 12 11:36:59.637: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 12 11:36:59.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:36:59.776: INFO: stderr: ""
Nov 12 11:36:59.776: INFO: stdout: "update-demo-nautilus-5l5rx update-demo-nautilus-86c2k "
Nov 12 11:36:59.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l5rx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:36:59.913: INFO: stderr: ""
Nov 12 11:36:59.913: INFO: stdout: ""
Nov 12 11:36:59.913: INFO: update-demo-nautilus-5l5rx is created but not running
Nov 12 11:37:04.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:37:05.056: INFO: stderr: ""
Nov 12 11:37:05.057: INFO: stdout: "update-demo-nautilus-5l5rx update-demo-nautilus-86c2k "
Nov 12 11:37:05.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l5rx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:37:05.207: INFO: stderr: ""
Nov 12 11:37:05.207: INFO: stdout: ""
Nov 12 11:37:05.207: INFO: update-demo-nautilus-5l5rx is created but not running
Nov 12 11:37:10.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7526'
Nov 12 11:37:10.329: INFO: stderr: ""
Nov 12 11:37:10.329: INFO: stdout: "update-demo-nautilus-5l5rx update-demo-nautilus-86c2k "
Nov 12 11:37:10.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l5rx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:37:10.458: INFO: stderr: ""
Nov 12 11:37:10.458: INFO: stdout: "true"
Nov 12 11:37:10.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5l5rx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:37:10.579: INFO: stderr: ""
Nov 12 11:37:10.579: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:37:10.579: INFO: validating pod update-demo-nautilus-5l5rx
Nov 12 11:37:10.584: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:37:10.584: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:37:10.584: INFO: update-demo-nautilus-5l5rx is verified up and running
Nov 12 11:37:10.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:37:10.719: INFO: stderr: ""
Nov 12 11:37:10.719: INFO: stdout: "true"
Nov 12 11:37:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-86c2k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7526'
Nov 12 11:37:10.856: INFO: stderr: ""
Nov 12 11:37:10.856: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 12 11:37:10.856: INFO: validating pod update-demo-nautilus-86c2k
Nov 12 11:37:10.859: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 12 11:37:10.859: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 12 11:37:10.859: INFO: update-demo-nautilus-86c2k is verified up and running
STEP: using delete to clean up resources
Nov 12 11:37:10.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7526'
Nov 12 11:37:10.985: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 12 11:37:10.985: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Nov 12 11:37:10.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7526'
Nov 12 11:37:11.120: INFO: stderr: "No resources found in kubectl-7526 namespace.\n"
Nov 12 11:37:11.120: INFO: stdout: ""
Nov 12 11:37:11.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7526 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 12 11:37:11.260: INFO: stderr: ""
Nov 12 11:37:11.260: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:37:11.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7526" for this suite.

• [SLOW TEST:34.460 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":275,"skipped":4516,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:37:11.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1396
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-1396
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1396
Nov 12 11:37:11.296: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Nov 12 11:37:21.299: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Nov 12 11:37:21.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 11:37:21.556: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 11:37:21.557: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 11:37:21.557: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 11:37:21.560: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Nov 12 11:37:31.563: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 11:37:31.563: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 11:37:31.572: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:37:31.572: INFO: ss-0  node3  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:37:31.572: INFO: 
Nov 12 11:37:31.572: INFO: StatefulSet ss has not reached scale 3, at 1
Nov 12 11:37:32.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997917391s
Nov 12 11:37:33.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.994646546s
Nov 12 11:37:34.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.991438838s
Nov 12 11:37:35.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.987917758s
Nov 12 11:37:36.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.984227559s
Nov 12 11:37:37.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.981125533s
Nov 12 11:37:38.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.978280493s
Nov 12 11:37:39.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.975299245s
Nov 12 11:37:40.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 971.14076ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1396
Nov 12 11:37:41.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 11:37:41.823: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Nov 12 11:37:41.823: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 11:37:41.823: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 11:37:41.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 11:37:42.085: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Nov 12 11:37:42.085: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 11:37:42.085: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 11:37:42.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Nov 12 11:37:42.339: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Nov 12 11:37:42.339: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Nov 12 11:37:42.339: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Nov 12 11:37:42.342: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:37:42.342: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:37:42.342: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Nov 12 11:37:52.346: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:37:52.346: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 12 11:37:52.346: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Nov 12 11:37:52.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 11:37:52.585: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 11:37:52.585: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 11:37:52.585: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 11:37:52.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 11:37:52.816: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 11:37:52.816: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 11:37:52.816: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 11:37:52.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1396 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Nov 12 11:37:53.069: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Nov 12 11:37:53.069: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Nov 12 11:37:53.069: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Nov 12 11:37:53.069: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 11:37:53.071: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 12 11:38:03.077: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 11:38:03.077: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 11:38:03.077: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Nov 12 11:38:03.085: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:03.085: INFO: ss-0  node3  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:38:03.085: INFO: ss-1  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:03.085: INFO: ss-2  node4  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:03.085: INFO: 
Nov 12 11:38:03.085: INFO: StatefulSet ss has not reached scale 0, at 3
Nov 12 11:38:04.089: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:04.089: INFO: ss-0  node3  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:38:04.089: INFO: ss-1  node1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:04.089: INFO: ss-2  node4  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:04.089: INFO: 
Nov 12 11:38:04.089: INFO: StatefulSet ss has not reached scale 0, at 3
Nov 12 11:38:05.092: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:05.092: INFO: ss-0  node3  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:38:05.092: INFO: ss-1  node1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:05.092: INFO: ss-2  node4  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:05.092: INFO: 
Nov 12 11:38:05.092: INFO: StatefulSet ss has not reached scale 0, at 3
Nov 12 11:38:06.095: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:06.095: INFO: ss-0  node3  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:38:06.095: INFO: ss-1  node1  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:06.095: INFO: ss-2  node4  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:06.096: INFO: 
Nov 12 11:38:06.096: INFO: StatefulSet ss has not reached scale 0, at 3
Nov 12 11:38:07.098: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:07.098: INFO: ss-0  node3  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:38:07.098: INFO: ss-1  node1  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:07.098: INFO: ss-2  node4  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:07.098: INFO: 
Nov 12 11:38:07.098: INFO: StatefulSet ss has not reached scale 0, at 3
Nov 12 11:38:08.101: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:08.101: INFO: ss-0  node3  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:11 +0000 UTC  }]
Nov 12 11:38:08.101: INFO: ss-1  node1  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:08.101: INFO: ss-2  node4  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:08.101: INFO: 
Nov 12 11:38:08.101: INFO: StatefulSet ss has not reached scale 0, at 3
Nov 12 11:38:09.104: INFO: POD   NODE   PHASE    GRACE  CONDITIONS
Nov 12 11:38:09.104: INFO: ss-2  node4  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-12 11:37:31 +0000 UTC  }]
Nov 12 11:38:09.104: INFO: 
Nov 12 11:38:09.104: INFO: StatefulSet ss has not reached scale 0, at 1
Nov 12 11:38:10.107: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.978022175s
Nov 12 11:38:11.109: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.975429027s
Nov 12 11:38:12.112: INFO: Verifying statefulset ss doesn't scale past 0 for another 972.59222ms
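
The countdown above is the framework polling the StatefulSet's status while verifying it holds at the target scale. A rough command-line equivalent of that polling (the jsonpath fields are the standard apps/v1 StatefulSet status fields):

# Print spec'd replicas vs. observed and ready replicas for the set
kubectl --kubeconfig=/root/.kube/config get statefulset ss -n statefulset-1396 \
  -o jsonpath='{.spec.replicas} {.status.replicas} {.status.readyReplicas}{"\n"}'
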
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-1396
Nov 12 11:38:13.114: INFO: Scaling statefulset ss to 0
Nov 12 11:38:13.120: INFO: Waiting for statefulset status.replicas updated to 0
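
"Scaling statefulset ss to 0" goes through the e2e framework here; a minimal command-line equivalent using the names from this log would be:

# Scale the set to zero and confirm its pods drain from the namespace
kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 -n statefulset-1396
kubectl --kubeconfig=/root/.kube/config get pods -n statefulset-1396
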
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Nov 12 11:38:13.122: INFO: Deleting all statefulset in ns statefulset-1396
Nov 12 11:38:13.123: INFO: Scaling statefulset ss to 0
Nov 12 11:38:13.128: INFO: Waiting for statefulset status.replicas updated to 0
Nov 12 11:38:13.130: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:38:13.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1396" for this suite.

• [SLOW TEST:61.867 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":276,"skipped":4523,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:38:13.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-bd65618d-24f9-46fe-8bfb-5b46d0ed52f4
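
This spec never waits on a pod because the object is rejected at creation time: Secret keys must be non-empty and match the config-key character rules, so a data map keyed by "" fails apiserver validation. A minimal reproduction sketch (the secret name below is illustrative, not the generated one from the log, and the default namespace is assumed):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
stringData:
  "": "value"
EOF
# Expected: the create is rejected with a validation error such as
# "a valid config key must consist of alphanumeric characters, '-', '_' or '.'"
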
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:38:13.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9311" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":277,"skipped":4533,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 12 11:38:13.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
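
The pod created here reads its own UID through the downward API as an environment variable. A minimal sketch of such a pod (the pod name, image, and env var name are illustrative; fieldPath metadata.uid is the standard downward API field for the pod UID, and the default namespace is assumed):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
# Once the pod completes, its log prints the injected UID:
kubectl --kubeconfig=/root/.kube/config logs downward-api-demo
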
Nov 12 11:38:13.177: INFO: Waiting up to 5m0s for pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f" in namespace "downward-api-9951" to be "success or failure"
Nov 12 11:38:13.179: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.317392ms
Nov 12 11:38:15.182: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004297787s
Nov 12 11:38:17.184: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006770089s
Nov 12 11:38:19.187: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009674316s
Nov 12 11:38:21.190: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.012620819s
Nov 12 11:38:23.193: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.015190081s
STEP: Saw pod success
Nov 12 11:38:23.193: INFO: Pod "downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f" satisfied condition "success or failure"
Nov 12 11:38:23.194: INFO: Trying to get logs from node node2 pod downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f container dapi-container: 
STEP: delete the pod
Nov 12 11:38:23.251: INFO: Waiting for pod downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f to disappear
Nov 12 11:38:23.253: INFO: Pod downward-api-74ed42ab-1952-4e6a-b4c8-67c516d7277f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Nov 12 11:38:23.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9951" for this suite.

• [SLOW TEST:10.096 seconds]
[sig-node] Downward API
/workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.17.13-rc.0.25+4a28e05c652659/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4550,"failed":0}
SSSSSSSSSSSSSSSSS
Nov 12 11:38:23.258: INFO: Running AfterSuite actions on all nodes
Nov 12 11:38:23.258: INFO: Running AfterSuite actions on node 1
Nov 12 11:38:23.258: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4567,"failed":0}

Ran 278 of 4845 Specs in 6371.463 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4567 Skipped
PASS