Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621882761 - Will randomize all specs
Will run 5667 specs

Running in parallel across 10 nodes

May 24 18:59:22.971: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:22.975: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 24 18:59:22.999: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 18:59:23.045: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 18:59:23.045: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 24 18:59:23.045: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 24 18:59:23.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 24 18:59:23.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 24 18:59:23.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 24 18:59:23.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 24 18:59:23.056: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 24 18:59:23.056: INFO: e2e test version: v1.20.6
May 24 18:59:23.058: INFO: kube-apiserver version: v1.20.7
May 24 18:59:23.058: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.067: INFO: Cluster IP family: ipv4
May 24 18:59:23.059: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.078: INFO: Cluster IP family: ipv4
May 24 18:59:23.077: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.097: INFO: Cluster IP family: ipv4
May 24 18:59:23.079: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.105: INFO: Cluster IP family: ipv4
May 24 18:59:23.087: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.106: INFO: Cluster IP family: ipv4
May 24 18:59:23.098: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.116: INFO: Cluster IP family: ipv4
May 24 18:59:23.122: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.141: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 24 18:59:23.150: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.168: INFO: Cluster IP family: ipv4
May 24 18:59:23.153: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.168: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
May 24 18:59:23.158: INFO: >>> kubeConfig: /root/.kube/config
May 24 18:59:23.173: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 24 18:59:23.206: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.210: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 24 18:59:23.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b" in namespace "projected-8924" to be "Succeeded or Failed"
May 24 18:59:23.220: INFO: Pod "downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214882ms
May 24 18:59:25.223: INFO: Pod "downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005387521s
STEP: Saw pod success
May 24 18:59:25.223: INFO: Pod "downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b" satisfied condition "Succeeded or Failed"
May 24 18:59:25.226: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b container client-container:
STEP: delete the pod
May 24 18:59:25.632: INFO: Waiting for pod downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b to disappear
May 24 18:59:25.635: INFO: Pod downwardapi-volume-420874c4-9279-49df-b9de-eee23eee651b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:25.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8924" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
May 24 18:59:23.168: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.170: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 18:59:23.176: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ab7846fc-7c34-41b6-b8ce-5cd9191bfe5a" in namespace "security-context-test-6232" to be "Succeeded or Failed"
May 24 18:59:23.177: INFO: Pod "busybox-readonly-false-ab7846fc-7c34-41b6-b8ce-5cd9191bfe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.561317ms
May 24 18:59:25.223: INFO: Pod "busybox-readonly-false-ab7846fc-7c34-41b6-b8ce-5cd9191bfe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047063363s
May 24 18:59:27.227: INFO: Pod "busybox-readonly-false-ab7846fc-7c34-41b6-b8ce-5cd9191bfe5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050791609s
May 24 18:59:27.227: INFO: Pod "busybox-readonly-false-ab7846fc-7c34-41b6-b8ce-5cd9191bfe5a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:27.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6232" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
May 24 18:59:23.245: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.248: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-276375cd-dc5f-47f7-916a-1ddb246c9ef5
STEP: Creating a pod to test consume secrets
May 24 18:59:23.259: INFO: Waiting up to 5m0s for pod "pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249" in namespace "secrets-9179" to be "Succeeded or Failed"
May 24 18:59:23.261: INFO: Pod "pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481075ms
May 24 18:59:25.264: INFO: Pod "pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005778981s
May 24 18:59:27.267: INFO: Pod "pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008859869s
STEP: Saw pod success
May 24 18:59:27.267: INFO: Pod "pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249" satisfied condition "Succeeded or Failed"
May 24 18:59:27.275: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249 container secret-volume-test:
STEP: delete the pod
May 24 18:59:27.565: INFO: Waiting for pod pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249 to disappear
May 24 18:59:27.568: INFO: Pod pod-secrets-834ddaa1-6ec3-4186-9fdb-43a5f27f5249 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:27.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9179" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
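For readers reconstructing what the first spec above actually drives: "should set mode on item file" creates a pod whose downward API volume projects pod metadata into a file with an explicit per-file mode, then asserts on the mode the container observes. A minimal client-go sketch of that shape, assuming the same kubeconfig path the suite logs; the pod name, namespace, image, and command here are illustrative, not the suite's generated fixtures:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs with ">>> kubeConfig:".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0o400) // hypothetical per-file mode to assert on
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode, // the per-item file mode the spec exercises
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}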
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:27.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 18:59:27.742: INFO: Checking APIGroup: apiregistration.k8s.io
May 24 18:59:27.744: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
May 24 18:59:27.744: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.744: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
May 24 18:59:27.744: INFO: Checking APIGroup: apps
May 24 18:59:27.745: INFO: PreferredVersion.GroupVersion: apps/v1
May 24 18:59:27.745: INFO: Versions found [{apps/v1 v1}]
May 24 18:59:27.745: INFO: apps/v1 matches apps/v1
May 24 18:59:27.745: INFO: Checking APIGroup: events.k8s.io
May 24 18:59:27.746: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
May 24 18:59:27.746: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.746: INFO: events.k8s.io/v1 matches events.k8s.io/v1
May 24 18:59:27.746: INFO: Checking APIGroup: authentication.k8s.io
May 24 18:59:27.748: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
May 24 18:59:27.748: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.748: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
May 24 18:59:27.748: INFO: Checking APIGroup: authorization.k8s.io
May 24 18:59:27.749: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
May 24 18:59:27.749: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.749: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
May 24 18:59:27.749: INFO: Checking APIGroup: autoscaling
May 24 18:59:27.750: INFO: PreferredVersion.GroupVersion: autoscaling/v1
May 24 18:59:27.750: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
May 24 18:59:27.750: INFO: autoscaling/v1 matches autoscaling/v1
May 24 18:59:27.750: INFO: Checking APIGroup: batch
May 24 18:59:27.752: INFO: PreferredVersion.GroupVersion: batch/v1
May 24 18:59:27.752: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
May 24 18:59:27.752: INFO: batch/v1 matches batch/v1
May 24 18:59:27.752: INFO: Checking APIGroup: certificates.k8s.io
May 24 18:59:27.753: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
May 24 18:59:27.753: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.753: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
May 24 18:59:27.753: INFO: Checking APIGroup: networking.k8s.io
May 24 18:59:27.754: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
May 24 18:59:27.754: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.754: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
May 24 18:59:27.754: INFO: Checking APIGroup: extensions
May 24 18:59:27.755: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
May 24 18:59:27.755: INFO: Versions found [{extensions/v1beta1 v1beta1}]
May 24 18:59:27.755: INFO: extensions/v1beta1 matches extensions/v1beta1
May 24 18:59:27.755: INFO: Checking APIGroup: policy
May 24 18:59:27.757: INFO: PreferredVersion.GroupVersion: policy/v1beta1
May 24 18:59:27.757: INFO: Versions found [{policy/v1beta1 v1beta1}]
May 24 18:59:27.757: INFO: policy/v1beta1 matches policy/v1beta1
May 24 18:59:27.757: INFO: Checking APIGroup: rbac.authorization.k8s.io
May 24 18:59:27.758: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
May 24 18:59:27.758: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.758: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
May 24 18:59:27.758: INFO: Checking APIGroup: storage.k8s.io
May 24 18:59:27.759: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
May 24 18:59:27.759: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.759: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
May 24 18:59:27.759: INFO: Checking APIGroup: admissionregistration.k8s.io
May 24 18:59:27.760: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
May 24 18:59:27.760: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.760: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
May 24 18:59:27.760: INFO: Checking APIGroup: apiextensions.k8s.io
May 24 18:59:27.761: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
May 24 18:59:27.761: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.761: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
May 24 18:59:27.761: INFO: Checking APIGroup: scheduling.k8s.io
May 24 18:59:27.762: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
May 24 18:59:27.762: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.762: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
May 24 18:59:27.762: INFO: Checking APIGroup: coordination.k8s.io
May 24 18:59:27.763: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
May 24 18:59:27.763: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.763: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
May 24 18:59:27.763: INFO: Checking APIGroup: node.k8s.io
May 24 18:59:27.764: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
May 24 18:59:27.764: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.764: INFO: node.k8s.io/v1 matches node.k8s.io/v1
May 24 18:59:27.764: INFO: Checking APIGroup: discovery.k8s.io
May 24 18:59:27.766: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
May 24 18:59:27.766: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.766: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
May 24 18:59:27.766: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
May 24 18:59:27.767: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1
May 24 18:59:27.767: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
May 24 18:59:27.767: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1
May 24 18:59:27.767: INFO: Checking APIGroup: k8s.cni.cncf.io
May 24 18:59:27.768: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1
May 24 18:59:27.768: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}]
May 24 18:59:27.768: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1
May 24 18:59:27.768: INFO: Checking APIGroup: projectcontour.io
May 24 18:59:27.769: INFO: PreferredVersion.GroupVersion: projectcontour.io/v1
May 24 18:59:27.769: INFO: Versions found [{projectcontour.io/v1 v1} {projectcontour.io/v1alpha1 v1alpha1}]
May 24 18:59:27.769: INFO: projectcontour.io/v1 matches projectcontour.io/v1
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:27.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-9564" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:27.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-9a19a802-c601-46fb-b82e-a1ffe4ed7250
STEP: Creating a pod to test consume secrets
May 24 18:59:27.671: INFO: Waiting up to 5m0s for pod "pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c" in namespace "secrets-3024" to be "Succeeded or Failed"
May 24 18:59:27.673: INFO: Pod "pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170725ms
May 24 18:59:29.681: INFO: Pod "pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009568408s
STEP: Saw pod success
May 24 18:59:29.681: INFO: Pod "pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c" satisfied condition "Succeeded or Failed"
May 24 18:59:29.684: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c container secret-volume-test:
STEP: delete the pod
May 24 18:59:29.699: INFO: Waiting for pod pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c to disappear
May 24 18:59:29.701: INFO: Pod pod-secrets-9e15b10c-d5fe-4ee1-9385-d30b02649e9c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:29.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3024" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
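The Discovery spec above simply walks every advertised API group and checks that its PreferredVersion appears in that group's versions list. A rough equivalent using client-go's discovery client, under the same kubeconfig assumption as the earlier sketch, which would print output very close to the log lines above:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ServerGroups returns the same group/version data the spec iterates over.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		fmt.Printf("Checking APIGroup: %s\n", g.Name)
		fmt.Printf("PreferredVersion.GroupVersion: %s\n", g.PreferredVersion.GroupVersion)
		fmt.Printf("Versions found %v\n", g.Versions)
		for _, v := range g.Versions {
			// The conformance check: the preferred version must be served.
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				fmt.Printf("%s matches %s\n", v.GroupVersion, g.PreferredVersion.GroupVersion)
			}
		}
	}
}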
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 24 18:59:23.211: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.214: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating the pod
May 24 18:59:27.751: INFO: Successfully updated pod "labelsupdate5e8759f3-736d-450b-a6da-f37b0e3cdf2e"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:29.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4044" for this suite.

• [SLOW TEST:6.654 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:25.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-9ed7af83-f35b-4c4b-b303-41a32e533b08
STEP: Creating secret with name secret-projected-all-test-volume-83c494bc-afe9-44c0-9695-84726973b023
STEP: Creating a pod to test Check all projections for projected volume plugin
May 24 18:59:25.739: INFO: Waiting up to 5m0s for pod "projected-volume-144a7816-e7b4-4739-80ce-154998e07920" in namespace "projected-1683" to be "Succeeded or Failed"
May 24 18:59:25.742: INFO: Pod "projected-volume-144a7816-e7b4-4739-80ce-154998e07920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129895ms
May 24 18:59:27.745: INFO: Pod "projected-volume-144a7816-e7b4-4739-80ce-154998e07920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005426129s
May 24 18:59:29.748: INFO: Pod "projected-volume-144a7816-e7b4-4739-80ce-154998e07920": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008948354s
STEP: Saw pod success
May 24 18:59:29.748: INFO: Pod "projected-volume-144a7816-e7b4-4739-80ce-154998e07920" satisfied condition "Succeeded or Failed"
May 24 18:59:29.751: INFO: Trying to get logs from node leguer-worker2 pod projected-volume-144a7816-e7b4-4739-80ce-154998e07920 container projected-all-volume-test:
STEP: delete the pod
May 24 18:59:29.839: INFO: Waiting for pod projected-volume-144a7816-e7b4-4739-80ce-154998e07920 to disappear
May 24 18:59:29.842: INFO: Pod projected-volume-144a7816-e7b4-4739-80ce-154998e07920 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:29.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1683" for this suite.
•S
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:29.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 24 18:59:29.922: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3785 fd6dc22e-5857-490d-9e90-9888dd682e05 814484 0 2021-05-24 18:59:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May 24 18:59:29.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3785 fd6dc22e-5857-490d-9e90-9888dd682e05 814486 0 2021-05-24 18:59:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:29.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3785" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
May 24 18:59:23.152: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.163: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:30.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-830" for this suite.

• [SLOW TEST:7.063 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
May 24 18:59:23.232: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.235: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 24 18:59:23.242: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 24 18:59:23.247: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 24 18:59:23.248: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 24 18:59:23.254: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 24 18:59:23.254: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 24 18:59:23.260: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 24 18:59:23.260: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 24 18:59:30.287: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:30.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-98" for this suite.

• [SLOW TEST:7.099 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
S
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":1,"skipped":49,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
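Of the specs above, "start watching from a specific resource version" is the most API-mechanical: it mutates a ConfigMap twice, deletes it, then opens a watch at the first update's resourceVersion and expects the apiserver to replay only the later MODIFIED and DELETED events (exactly the two "Got :" lines in the log). A compact client-go sketch of that pattern, with illustrative object names and the same kubeconfig assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cms := cs.CoreV1().ConfigMaps("default")

	cm, err := cms.Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-watch-demo"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// First mutation; remember its resourceVersion as the watch start point.
	cm.Data = map[string]string{"mutation": "1"}
	if cm, err = cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fromRV := cm.ResourceVersion

	// Second mutation and a delete, both of which the watch should replay.
	cm.Data["mutation"] = "2"
	if cm, err = cms.Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	if err := cms.Delete(ctx, cm.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	w, err := cms.Watch(ctx, metav1.ListOptions{
		ResourceVersion: fromRV, // replay history newer than this version
		FieldSelector:   "metadata.name=e2e-watch-demo",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
		if ev.Type == "DELETED" {
			return
		}
	}
}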
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:27.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
May 24 18:59:27.836: INFO: Waiting up to 5m0s for pod "pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec" in namespace "emptydir-3760" to be "Succeeded or Failed"
May 24 18:59:27.839: INFO: Pod "pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621554ms
May 24 18:59:29.842: INFO: Pod "pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005428509s
May 24 18:59:31.845: INFO: Pod "pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009094727s
STEP: Saw pod success
May 24 18:59:31.845: INFO: Pod "pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec" satisfied condition "Succeeded or Failed"
May 24 18:59:31.848: INFO: Trying to get logs from node leguer-worker pod pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec container test-container:
STEP: delete the pod
May 24 18:59:31.864: INFO: Waiting for pod pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec to disappear
May 24 18:59:31.867: INFO: Pod pod-9d6fb88f-bdcf-4515-a7ce-bb897b28b0ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:31.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3760" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}
SSS
------------------------------
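The "(root,0666,default)" emptydir spec above exercises file permissions on the default (node-disk) medium: mount an emptyDir volume, have a root container create a file with mode 0666, and read the mode back. A hypothetical minimal pod with that shape, again assuming the suite's kubeconfig path and using made-up names:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file with mode 0666, then print the mode back out.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// An empty EmptyDirVolumeSource means the default medium
					// (node disk); the tmpfs variant of this test would set
					// Medium: corev1.StorageMediumMemory instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}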
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
May 24 18:59:23.232: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.235: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 24 18:59:23.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 24 18:59:25.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:27.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479563, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 24 18:59:30.972: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:32.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-569" for this suite.
STEP: Destroying namespace "webhook-569-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:8.995 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":73,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:29.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-b0a730f0-58de-4653-a4c0-0d6dda79a1c5
STEP: Creating a pod to test consume configMaps
May 24 18:59:29.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f" in namespace "configmap-9010" to be "Succeeded or Failed"
May 24 18:59:29.997: INFO: Pod "pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.906451ms
May 24 18:59:32.000: INFO: Pod "pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006536227s
May 24 18:59:34.004: INFO: Pod "pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010371644s
May 24 18:59:36.008: INFO: Pod "pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013763976s
STEP: Saw pod success
May 24 18:59:36.008: INFO: Pod "pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f" satisfied condition "Succeeded or Failed"
May 24 18:59:36.011: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f container agnhost-container:
STEP: delete the pod
May 24 18:59:36.024: INFO: Waiting for pod pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f to disappear
May 24 18:59:36.027: INFO: Pod pod-configmaps-dcd10192-de82-44ad-8a84-90cbcfa97f8f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:36.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9010" for this suite.

• [SLOW TEST:6.084 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:30.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9510.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9510.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 24 18:59:36.402: INFO: DNS probes using dns-9510/dns-test-a6532b6b-d236-42ca-97e7-b15ce0ffbd03 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:36.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9510" for this suite.

• [SLOW TEST:6.091 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":2,"skipped":60,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:31.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-1b008cff-2064-457b-96c8-cf660eedf1b4
STEP: Creating a pod to test consume configMaps
May 24 18:59:31.935: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08" in namespace "projected-4155" to be "Succeeded or Failed"
May 24 18:59:31.938: INFO: Pod "pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914699ms
May 24 18:59:33.943: INFO: Pod "pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007673946s
May 24 18:59:35.946: INFO: Pod "pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01098364s
May 24 18:59:37.950: INFO: Pod "pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014125542s
STEP: Saw pod success
May 24 18:59:37.950: INFO: Pod "pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08" satisfied condition "Succeeded or Failed"
May 24 18:59:37.952: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08 container projected-configmap-volume-test:
STEP: delete the pod
May 24 18:59:37.965: INFO: Waiting for pod pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08 to disappear
May 24 18:59:37.967: INFO: Pod pod-projected-configmaps-0a3039ea-646c-41ac-a5e0-aa453bd64e08 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:37.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4155" for this suite.

• [SLOW TEST:6.091 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:38.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:38.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7539" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":5,"skipped":83,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
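The sig-node ConfigMap lifecycle spec that closes the block above is pure API choreography: create, fetch, patch, list by label across namespaces, then delete by collection. A condensed version of those calls; the object name, label, and patch payload here are made up for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	cms := cs.CoreV1().ConfigMaps("default")

	// Create.
	cm, err := cms.Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "lifecycle-demo",
			Labels: map[string]string{"test-configmap": "demo"},
		},
		Data: map[string]string{"key": "value"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Patch the data in place.
	if _, err := cms.Patch(ctx, cm.Name, types.StrategicMergePatchType,
		[]byte(`{"data":{"key":"patched"}}`), metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// List across all namespaces with a label selector.
	all, err := cs.CoreV1().ConfigMaps(metav1.NamespaceAll).List(ctx,
		metav1.ListOptions{LabelSelector: "test-configmap=demo"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d matching ConfigMaps\n", len(all.Items))

	// Delete by collection, again via the label selector.
	if err := cms.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-configmap=demo"}); err != nil {
		panic(err)
	}
}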
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:30.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:38.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-615" for this suite.

• [SLOW TEST:8.056 seconds]
[k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":85,"failed":0}
SSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:38.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create set of pods
May 24 18:59:38.429: INFO: created test-pod-1
May 24 18:59:38.433: INFO: created test-pod-2
May 24 18:59:38.436: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:38.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7685" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":3,"skipped":88,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
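The Pods "delete a collection" spec above (test-pod-1 through test-pod-3) maps to a handful of Create calls followed by a single DeleteCollection scoped by a label selector. A sketch of that flow; the type=Testing label, namespace, and container details are hypothetical stand-ins for whatever the fixture actually uses:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("default")

	// Create a small set of labeled pods, mirroring test-pod-1..3.
	for i := 1; i <= 3; i++ {
		_, err := pods.Create(ctx, &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("test-pod-%d", i),
				Labels: map[string]string{"type": "Testing"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "token-test",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("created test-pod-%d\n", i)
	}

	// One call removes the whole labeled set.
	if err := pods.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"}); err != nil {
		panic(err)
	}
}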
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:38.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
May 24 18:59:38.543: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
May 24 18:59:38.562: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:38.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-2399" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:36.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:40.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8348" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}
SSSSSSSSS
------------------------------
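The RuntimeClass spec in the block above runs the standard API-operations gauntlet (create, watch, get, list, patch, update, delete, delete collection) against node.k8s.io/v1, which the Discovery output earlier in the log confirmed is served. The core calls, trimmed here to create/list/delete, with an assumed runc handler name and an illustrative object name:

package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	rcs := cs.NodeV1().RuntimeClasses() // cluster-scoped, so no namespace

	rc, err := rcs.Create(ctx, &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-demo"},
		Handler:    "runc", // must name a handler the node's CRI runtime knows
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	list, err := rcs.List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster has %d RuntimeClasses\n", len(list.Items))

	if err := rcs.Delete(ctx, rc.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}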
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:36.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 18:59:41.739: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7970" for this suite. • [SLOW TEST:5.142 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":175,"failed":0} SSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":4,"skipped":105,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:38.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 18:59:38.633: INFO: Waiting up to 5m0s for pod "pod-18272727-bba9-4865-9710-b216e95f020e" in namespace "emptydir-2423" to be "Succeeded or Failed" May 24 18:59:38.636: INFO: 
Pod "pod-18272727-bba9-4865-9710-b216e95f020e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.948108ms May 24 18:59:40.639: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005901339s May 24 18:59:42.642: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009269809s STEP: Saw pod success May 24 18:59:42.642: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e" satisfied condition "Succeeded or Failed" May 24 18:59:42.645: INFO: Trying to get logs from node leguer-worker2 pod pod-18272727-bba9-4865-9710-b216e95f020e container test-container: STEP: delete the pod May 24 18:59:42.657: INFO: Waiting for pod pod-18272727-bba9-4865-9710-b216e95f020e to disappear May 24 18:59:42.660: INFO: Pod pod-18272727-bba9-4865-9710-b216e95f020e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2423" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":105,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:23.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services May 24 18:59:23.119: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 18:59:23.127: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-4088 STEP: creating service affinity-clusterip in namespace services-4088 STEP: creating replication controller affinity-clusterip in namespace services-4088 I0524 18:59:23.140047 27 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4088, replica count: 3 I0524 18:59:26.190411 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 18:59:29.190733 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 18:59:29.196: INFO: Creating new exec pod May 24 18:59:36.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4088 exec execpod-affinitymcs9n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 24 18:59:36.538: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" May 24 18:59:36.538: INFO: stdout: "" 
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:36.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 24 18:59:41.739: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:41.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7970" for this suite.
• [SLOW TEST:5.142 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
blackbox test
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
on terminated container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":175,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":4,"skipped":105,"failed":0}
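In the termination-message test above, FallbackToLogsOnError means the message stays empty when the container succeeds. A minimal sketch of the same setup (pod and container names are illustrative):
    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: termmsg-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/true"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    $ kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
Because the container exits 0, no log fallback happens and the jsonpath prints nothing, matching the empty &{} expectation logged above.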
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:38.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 24 18:59:38.633: INFO: Waiting up to 5m0s for pod "pod-18272727-bba9-4865-9710-b216e95f020e" in namespace "emptydir-2423" to be "Succeeded or Failed"
May 24 18:59:38.636: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.948108ms
May 24 18:59:40.639: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005901339s
May 24 18:59:42.642: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009269809s
STEP: Saw pod success
May 24 18:59:42.642: INFO: Pod "pod-18272727-bba9-4865-9710-b216e95f020e" satisfied condition "Succeeded or Failed"
May 24 18:59:42.645: INFO: Trying to get logs from node leguer-worker2 pod pod-18272727-bba9-4865-9710-b216e95f020e container test-container:
STEP: delete the pod
May 24 18:59:42.657: INFO: Waiting for pod pod-18272727-bba9-4865-9710-b216e95f020e to disappear
May 24 18:59:42.660: INFO: Pod pod-18272727-bba9-4865-9710-b216e95f020e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2423" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":105,"failed":0}
SSSS
------------------------------
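The emptydir 0644-on-tmpfs probe boils down to mounting an emptyDir with medium: Memory and writing a file as a non-root user. A minimal sketch, with illustrative names:
    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/sh", "-c", "echo hello > /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume && mount | grep /mnt/volume"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory
    EOF
    $ kubectl logs emptydir-demo
medium: Memory is what makes the volume tmpfs; the mount line in the pod's output should confirm it.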
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
May 24 18:59:23.119: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.127: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-4088
STEP: creating service affinity-clusterip in namespace services-4088
STEP: creating replication controller affinity-clusterip in namespace services-4088
I0524 18:59:23.140047 27 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4088, replica count: 3
I0524 18:59:26.190411 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0524 18:59:29.190733 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 24 18:59:29.196: INFO: Creating new exec pod
May 24 18:59:36.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4088 exec execpod-affinitymcs9n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80'
May 24 18:59:36.538: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
May 24 18:59:36.538: INFO: stdout: ""
May 24 18:59:36.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4088 exec execpod-affinitymcs9n -- /bin/sh -x -c nc -zv -t -w 2 10.96.247.210 80'
May 24 18:59:36.762: INFO: stderr: "+ nc -zv -t -w 2 10.96.247.210 80\nConnection to 10.96.247.210 80 port [tcp/http] succeeded!\n"
May 24 18:59:36.762: INFO: stdout: ""
May 24 18:59:36.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4088 exec execpod-affinitymcs9n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.247.210:80/ ; done'
May 24 18:59:37.169: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.247.210:80/\n"
May 24 18:59:37.169: INFO: stdout: "\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67\naffinity-clusterip-dhl67"
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Received response from host: affinity-clusterip-dhl67
May 24 18:59:37.169: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-4088, will wait for the garbage collector to delete the pods
May 24 18:59:37.233: INFO: Deleting ReplicationController affinity-clusterip took: 3.962569ms
May 24 18:59:37.334: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.322841ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:44.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4088" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• [SLOW TEST:20.967 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
SSSSSSS
------------------------------
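The stable backend in the curl loop above (every request answered by affinity-clusterip-dhl67) comes from sessionAffinity: ClientIP on the Service spec. While the test resources still exist, the same behavior can be confirmed by hand with the names from this run:
    $ kubectl -n services-4088 get service affinity-clusterip -o jsonpath='{.spec.sessionAffinity}'
    $ kubectl -n services-4088 exec execpod-affinitymcs9n -- /bin/sh -c 'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://affinity-clusterip:80/; echo; done'
With ClientIP affinity, kube-proxy pins each client IP to a single endpoint, so all sixteen responses should name the same pod.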
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:32.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 24 18:59:32.677: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 24 18:59:34.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:36.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479572, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 24 18:59:39.700: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 24 18:59:45.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=webhook-9962 attach --namespace=webhook-9962 to-be-attached-pod -i -c=container1'
May 24 18:59:45.890: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:45.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9962" for this suite.
STEP: Destroying namespace "webhook-9962-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:13.719 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":2,"skipped":87,"failed":0}
SSSSSS
------------------------------
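The rc: 1 from kubectl attach above is produced by a ValidatingWebhookConfiguration the test registers; at the API level, attach is a CONNECT operation on the pods/attach subresource, which is what the webhook's rules presumably match. Registered admission webhooks can be inspected with:
    $ kubectl get validatingwebhookconfigurations
    $ kubectl get validatingwebhookconfigurations -o jsonpath='{range .items[*].webhooks[*]}{.name}{"\t"}{.rules}{"\n"}{end}'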
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:38.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
May 24 18:59:38.191: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.191: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.197: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.197: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.207: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.207: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.218: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:38.218: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0 and labels map[test-deployment-static:true]
May 24 18:59:41.169: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment-static:true]
May 24 18:59:41.169: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment-static:true]
May 24 18:59:42.370: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
May 24 18:59:42.377: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 0
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:42.379: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.380: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.380: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.380: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.385: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.385: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.399: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.399: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 2
May 24 18:59:42.410: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
STEP: listing Deployments
May 24 18:59:42.414: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
May 24 18:59:42.427: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
May 24 18:59:42.435: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.439: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.446: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.464: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.473: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.478: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.487: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
May 24 18:59:42.491: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
May 24 18:59:46.185: INFO: observed Deployment test-deployment in namespace deployment-1494 with ReadyReplicas 1
STEP: deleting the Deployment
May 24 18:59:46.193: INFO: observed event type MODIFIED
May 24 18:59:46.193: INFO: observed event type MODIFIED
May 24 18:59:46.193: INFO: observed event type MODIFIED
May 24 18:59:46.193: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
May 24 18:59:46.194: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
May 24 18:59:46.196: INFO: Log out all the ReplicaSets if there is no deployment created
May 24 18:59:46.198: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-1494 96222118-8da6-4648-a13d-d2aa9fdb5abc 815502 4 2021-05-24 18:59:42 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 05e81b6f-c35f-4206-87c4-fe7565f47e95 0xc004a35ae7 0xc004a35ae8}] [] [{kube-controller-manager Update apps/v1 2021-05-24 18:59:46 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05e81b6f-c35f-4206-87c4-fe7565f47e95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a35b68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:46.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1494" for this suite. 
• [SLOW TEST:8.059 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":6,"skipped":104,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
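The create/patch/update/delete sequence the test just walked can be replayed with stock kubectl; a rough sketch (deployment name reused from the test, image from this run; kubectl create deployment names the container after the image basename, here pause):
    $ kubectl create deployment test-deployment --image=k8s.gcr.io/pause:3.2
    $ kubectl patch deployment test-deployment -p '{"metadata":{"labels":{"test-deployment":"patched"}}}'
    $ kubectl set image deployment/test-deployment pause=k8s.gcr.io/pause:3.2
    $ kubectl rollout status deployment/test-deployment
    $ kubectl delete deployment test-deployment
Each step produces the ADDED and MODIFIED watch events visible in the log above.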
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:23.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
May 24 18:59:23.120: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 18:59:23.127: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-configmap-mllr
STEP: Creating a pod to test atomic-volume-subpath
May 24 18:59:23.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mllr" in namespace "subpath-1740" to be "Succeeded or Failed"
May 24 18:59:23.149: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Pending", Reason="", readiness=false. Elapsed: 1.977931ms
May 24 18:59:25.152: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 2.005606964s
May 24 18:59:27.156: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 4.009298168s
May 24 18:59:29.160: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 6.012835799s
May 24 18:59:31.163: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 8.016173431s
May 24 18:59:33.166: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 10.0195789s
May 24 18:59:35.170: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 12.023163436s
May 24 18:59:37.173: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 14.026342553s
May 24 18:59:39.177: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 16.02979733s
May 24 18:59:41.180: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 18.032949254s
May 24 18:59:43.183: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 20.036024473s
May 24 18:59:45.186: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 22.039427711s
May 24 18:59:47.190: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Running", Reason="", readiness=true. Elapsed: 24.042824572s
May 24 18:59:49.193: INFO: Pod "pod-subpath-test-configmap-mllr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.046052922s
STEP: Saw pod success
May 24 18:59:49.193: INFO: Pod "pod-subpath-test-configmap-mllr" satisfied condition "Succeeded or Failed"
May 24 18:59:49.196: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-configmap-mllr container test-container-subpath-configmap-mllr:
STEP: delete the pod
May 24 18:59:49.210: INFO: Waiting for pod pod-subpath-test-configmap-mllr to disappear
May 24 18:59:49.213: INFO: Pod pod-subpath-test-configmap-mllr no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mllr
May 24 18:59:49.213: INFO: Deleting pod "pod-subpath-test-configmap-mllr" in namespace "subpath-1740"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:49.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1740" for this suite.
• [SLOW TEST:26.127 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
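The atomic-writer subpath case mounts a single ConfigMap key as a file via subPath; a minimal sketch, with illustrative names:
    $ kubectl create configmap subpath-demo --from-literal=key=value
    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/sh", "-c", "cat /etc/demo/key"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/demo/key
          subPath: key
      volumes:
      - name: cfg
        configMap:
          name: subpath-demo
    EOF
    $ kubectl logs subpath-demo
Unlike a whole-volume ConfigMap mount, a subPath mount is not updated live when the ConfigMap changes, a consequence of the atomic-writer symlink scheme these tests exercise.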
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:44.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-c7108979-4fbd-4986-a3c6-8c901f403413
STEP: Creating a pod to test consume secrets
May 24 18:59:44.143: INFO: Waiting up to 5m0s for pod "pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac" in namespace "secrets-7549" to be "Succeeded or Failed"
May 24 18:59:44.145: INFO: Pod "pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac": Phase="Pending", Reason="", readiness=false. Elapsed: 1.961543ms
May 24 18:59:46.148: INFO: Pod "pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005525524s
May 24 18:59:48.152: INFO: Pod "pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009436087s
May 24 18:59:50.221: INFO: Pod "pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077974642s
STEP: Saw pod success
May 24 18:59:50.221: INFO: Pod "pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac" satisfied condition "Succeeded or Failed"
May 24 18:59:50.224: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac container secret-volume-test:
STEP: delete the pod
May 24 18:59:50.240: INFO: Waiting for pod pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac to disappear
May 24 18:59:50.243: INFO: Pod pod-secrets-73bdf930-3185-4992-89ba-f92edac753ac no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:50.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7549" for this suite.
STEP: Destroying namespace "secret-namespace-1163" for this suite.
• [SLOW TEST:6.186 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:42.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 24 18:59:48.721: INFO: &Pod{ObjectMeta:{send-events-aa37e03a-4a36-44d2-86bb-df59f0404982 events-9424 27b0048a-5f0f-4746-8a1d-706d0374c628 815606 0 2021-05-24 18:59:42 +0000 UTC map[name:foo time:703592765] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.194" ], "mac": "9e:6f:d8:1b:66:ca", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.194" ], "mac": "9e:6f:d8:1b:66:ca", "default": true, "dns": {} }]] [] [] [{e2e.test Update v1 2021-05-24 18:59:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:43 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4nd86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4nd86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4nd86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:42 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.194,StartTime:2021-05-24 18:59:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://5aa1d249b00cbcfe3ed6d7f9b5a7ba19885ba2f8733344be64b42c7abddad1ba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
May 24 18:59:50.726: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
May 24 18:59:52.730: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:52.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9424" for this suite.
• [SLOW TEST:10.070 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":6,"skipped":109,"failed":0}
SSSSSSSSSSSS
------------------------------
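The scheduler and kubelet events just asserted on are ordinary Event objects and can be filtered with field selectors; with the names from this run, before the namespace is torn down:
    $ kubectl -n events-9424 get events --field-selector involvedObject.name=send-events-aa37e03a-4a36-44d2-86bb-df59f0404982
    $ kubectl -n events-9424 get events --field-selector reason=Scheduled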
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:41.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
May 24 18:59:42.172: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 24 18:59:42.188: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 24 18:59:44.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:46.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:48.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:50.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:52.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479582, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 24 18:59:55.234: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 18:59:55.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7681" for this suite.
STEP: Destroying namespace "webhook-7681-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:13.498 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 18:59:49.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 18:59:49.345: INFO: Creating deployment "test-recreate-deployment"
May 24 18:59:49.349: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
May 24 18:59:49.355: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
May 24 18:59:51.362: INFO: Waiting deployment "test-recreate-deployment" to complete
May 24 18:59:51.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 24 18:59:53.369: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479589, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 18:59:55.368: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 24 18:59:55.373: INFO: Updating deployment test-recreate-deployment May 24 18:59:55.373: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 May 24 18:59:55.412: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7404 343b6351-7738-4ee0-9fa2-410699ad1053 816020 2 2021-05-24 18:59:49 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-24 18:59:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 18:59:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b00578 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-05-24 18:59:55 +0000 UTC,LastTransitionTime:2021-05-24 18:59:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-05-24 18:59:55 +0000 UTC,LastTransitionTime:2021-05-24 18:59:49 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 24 18:59:55.415: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-7404 5388a8ba-db77-4319-8d50-60f2313b2bc8 816018 1 2021-05-24 18:59:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 343b6351-7738-4ee0-9fa2-410699ad1053 0xc004b009c0 0xc004b009c1}] [] [{kube-controller-manager Update apps/v1 2021-05-24 18:59:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"343b6351-7738-4ee0-9fa2-410699ad1053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b00a38 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 18:59:55.415: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 24 18:59:55.415: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-7404 e576636e-320b-42fe-8be6-1d8322300f19 816008 2 2021-05-24 18:59:49 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 343b6351-7738-4ee0-9fa2-410699ad1053 0xc004b008c7 0xc004b008c8}] [] [{kube-controller-manager Update apps/v1 2021-05-24 18:59:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"343b6351-7738-4ee0-9fa2-410699ad1053\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b00958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 18:59:55.418: INFO: Pod "test-recreate-deployment-f79dd4667-vvlbn" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-vvlbn test-recreate-deployment-f79dd4667- deployment-7404 4c17811c-237b-45b9-89b7-8a39a40eb22c 816015 0 2021-05-24 18:59:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet 
test-recreate-deployment-f79dd4667 5388a8ba-db77-4319-8d50-60f2313b2bc8 0xc004b00e30 0xc004b00e31}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5388a8ba-db77-4319-8d50-60f2313b2bc8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7dl6x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7dl6x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7dl6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ep
hemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:55.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7404" for this suite. • [SLOW TEST:6.109 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":2,"skipped":68,"failed":0} SSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":4,"skipped":190,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:55.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create deployment with httpd image May 24 18:59:55.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6717 create -f -' May 24 18:59:55.670: INFO: stderr: "" May 24 18:59:55.670: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image May 24 18:59:55.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6717 diff -f -' May 24 18:59:56.158: INFO: rc: 1 May 24 18:59:56.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6717 delete -f -' May 24 18:59:56.270: INFO: stderr: "" May 24 18:59:56.270: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:56.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6717" for this suite. 
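------------------------------
Note on the "rc: 1" logged by the diff step above: kubectl diff reports its verdict through the exit code. Exit 0 means the live objects already match the manifest, exit 1 means a difference was found (the outcome this test asserts), and anything greater than 1 means kubectl or the external diff program failed. A minimal Go sketch of the same check, not taken from the suite, assuming kubectl is on the PATH, the kubeconfig points at a reachable cluster, and a hypothetical manifest file deploy.yaml exists:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "deploy.yaml" is a placeholder path: any manifest you want to compare
	// against the live objects in the current kubeconfig context.
	out, err := exec.Command("kubectl", "diff", "-f", "deploy.yaml").CombinedOutput()
	if err == nil {
		fmt.Println("exit 0: live state already matches the manifest")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		// Exit code 1 is not a failure: it means a difference was found,
		// which is exactly what the test above asserts via rc 1.
		fmt.Printf("exit 1: drift detected\n%s", out)
		return
	}
	fmt.Println("kubectl diff itself failed:", err)
}

Shelling out to the binary mirrors what the framework does in the log above: it runs /usr/local/bin/kubectl against the live cluster rather than computing the diff client-side.
------------------------------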
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":5,"skipped":190,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:50.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-d1100b81-e2d9-4ea8-99a4-d23e94791460 STEP: Creating a pod to test consume secrets May 24 18:59:50.441: INFO: Waiting up to 5m0s for pod "pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9" in namespace "secrets-1187" to be "Succeeded or Failed" May 24 18:59:50.444: INFO: Pod "pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576496ms May 24 18:59:52.448: INFO: Pod "pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006525668s May 24 18:59:54.451: INFO: Pod "pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009930435s May 24 18:59:56.454: INFO: Pod "pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012467433s STEP: Saw pod success May 24 18:59:56.454: INFO: Pod "pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9" satisfied condition "Succeeded or Failed" May 24 18:59:56.456: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9 container secret-env-test: STEP: delete the pod May 24 18:59:56.466: INFO: Waiting for pod pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9 to disappear May 24 18:59:56.468: INFO: Pod pod-secrets-b9204886-1a78-4440-ad10-66224a3bd3e9 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:56.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1187" for this suite. 
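------------------------------
Note on the secret-env pod above: the test creates a Secret and then a pod whose container declares an environment variable sourced from a key of that Secret; the pod can only reach "Succeeded" if the kubelet resolved the value before the container started, which is why the phase check doubles as the assertion. A short sketch of that wiring using the k8s.io/api types the dumps in this log are printed from (the names secret-test and data-1, the busybox image, and the echo command are illustrative placeholders, not the generated values in the log; building it requires the k8s.io/api and k8s.io/apimachinery modules):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose container reads SECRET_DATA from key "data-1" of Secret
	// "secret-test"; the kubelet injects the value at container start.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox:1.33",
				Command: []string{"sh", "-c", "echo SECRET_DATA=$SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	fmt.Printf("%s consumes secret %q key %q\n",
		pod.Name,
		pod.Spec.Containers[0].Env[0].ValueFrom.SecretKeyRef.Name,
		pod.Spec.Containers[0].Env[0].ValueFrom.SecretKeyRef.Key)
}

Submitting the object would go through a client-go clientset; the construction above is only the spec-level wiring that this conformance test exercises.
------------------------------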
• [SLOW TEST:6.172 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:46.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 18:59:46.268: INFO: Creating deployment "webserver-deployment" May 24 18:59:46.272: INFO: Waiting for observed generation 1 May 24 18:59:48.280: INFO: Waiting for all required pods to come up May 24 18:59:48.285: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 24 18:59:56.292: INFO: Waiting for deployment "webserver-deployment" to complete May 24 18:59:56.298: INFO: Updating deployment "webserver-deployment" with a non-existent image May 24 18:59:56.305: INFO: Updating deployment webserver-deployment May 24 18:59:56.305: INFO: Waiting for observed generation 2 May 24 18:59:58.311: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 24 18:59:58.314: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 24 18:59:58.317: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 24 18:59:58.327: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 24 18:59:58.327: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 24 18:59:58.330: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 24 18:59:58.335: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 24 18:59:58.335: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 24 18:59:58.346: INFO: Updating deployment webserver-deployment May 24 18:59:58.346: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 24 18:59:58.351: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 24 18:59:58.356: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 May 24 18:59:58.362: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9190 135d665b-3d83-4b04-84ea-eb8b38a18c93 816223 3 2021-05-24 
18:59:46 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d23c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-24 18:59:55 +0000 UTC,LastTransitionTime:2021-05-24 18:59:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-05-24 18:59:56 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 24 18:59:58.365: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9190 ad80166a-202c-40ea-9b9a-653344a619ac 816226 3 2021-05-24 18:59:56 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 135d665b-3d83-4b04-84ea-eb8b38a18c93 0xc004d23fb7 0xc004d23fb8}] [] [{kube-controller-manager Update apps/v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"135d665b-3d83-4b04-84ea-eb8b38a18c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0004928e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 18:59:58.365: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 24 18:59:58.365: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-9190 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 816224 3 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 135d665b-3d83-4b04-84ea-eb8b38a18c93 0xc000492b77 0xc000492b78}] [] [{kube-controller-manager Update apps/v1 2021-05-24 18:59:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"135d665b-3d83-4b04-84ea-eb8b38a18c93\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000492d18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 24 18:59:58.372: INFO: Pod "webserver-deployment-795d758f88-8g8pw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8g8pw webserver-deployment-795d758f88- deployment-9190 3f56391a-5015-4665-9eff-c7976311030d 816237 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396577 0xc001396578}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:Be
stEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.372: INFO: Pod "webserver-deployment-795d758f88-9jmhk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9jmhk webserver-deployment-795d758f88- deployment-9190 1a34f967-3fbf-4f94-8ad0-24ed82222119 816169 0 2021-05-24 18:59:56 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.205" ], "mac": "ea:f1:0f:d6:7f:c8", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.205" ], "mac": "ea:f1:0f:d6:7f:c8", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc0013966b0 0xc0013966b1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,Hos
tIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.373: INFO: Pod "webserver-deployment-795d758f88-bgrd5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bgrd5 webserver-deployment-795d758f88- deployment-9190 c80ff36e-9ec2-4acc-bcee-6d8c89ef994c 816188 0 2021-05-24 18:59:56 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.23" ], "mac": "42:7a:62:d0:66:2c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.23" ], "mac": "42:7a:62:d0:66:2c", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396800 0xc001396801}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.373: INFO: Pod "webserver-deployment-795d758f88-ll5v5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ll5v5 webserver-deployment-795d758f88- deployment-9190 
55748bc6-aa63-4e92-ae4f-6b68f380029d 816186 0 2021-05-24 18:59:56 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.22" ], "mac": "12:17:40:74:a9:2f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.22" ], "mac": "12:17:40:74:a9:2f", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396960 0xc001396961}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-schedul
er,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.373: INFO: Pod "webserver-deployment-795d758f88-mzbpn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mzbpn webserver-deployment-795d758f88- deployment-9190 9a1bb9a2-2cbb-495d-bd34-4925b0da2b9b 816228 0 2021-05-24 18:59:56 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.21" ], "mac": "5a:d9:5d:de:f9:3f", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.21" ], "mac": "5a:d9:5d:de:f9:3f", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396ab0 0xc001396ab1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-05-24 18:59:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.374: INFO: Pod "webserver-deployment-795d758f88-nrfbr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nrfbr webserver-deployment-795d758f88- deployment-9190 34d24f5a-67a9-4ded-be46-52d7bdfa0020 816181 0 2021-05-24 18:59:56 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.206" ], "mac": "a6:1e:07:a9:17:59", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.206" ], "mac": "a6:1e:07:a9:17:59", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396c80 0xc001396c81}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.374: INFO: Pod "webserver-deployment-795d758f88-vwkms" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vwkms webserver-deployment-795d758f88- deployment-9190 
654386e0-4044-4622-b44d-6dc071170f03 816238 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396dd0 0xc001396dd1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,
PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 24 18:59:58.374: INFO: Pod "webserver-deployment-795d758f88-vzddw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vzddw webserver-deployment-795d758f88- deployment-9190 6e52e73a-4f3e-4feb-aa8b-950a28db4c32 816236 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 ad80166a-202c-40ea-9b9a-653344a619ac 0xc001396f30 0xc001396f31}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad80166a-202c-40ea-9b9a-653344a619ac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 24 18:59:58.375: INFO: Pod "webserver-deployment-dd94f59b7-4gz7n" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4gz7n webserver-deployment-dd94f59b7- deployment-9190 10ce69d7-7453-4cb7-98ea-af5920d54948 815954 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.16" ], "mac": "ae:e0:58:78:2e:b0", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.16" ], "mac": "ae:e0:58:78:2e:b0", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397050 0xc001397051}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.16,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://17364c9d020802130c75c6ac8fda9aa046f9d091614a280ac51a70e6d9e6f7e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.375: INFO: Pod "webserver-deployment-dd94f59b7-4vrgg" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4vrgg webserver-deployment-dd94f59b7- deployment-9190 832ed5da-e9aa-4ef4-b05e-2c10c55461ee 815988 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.15" ], "mac": "2e:2e:df:d7:b4:ef", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.15" ], "mac": "2e:2e:df:d7:b4:ef", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397240 0xc001397241}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.15,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6b3d8cf5590cbb15a091494731d7c44d1583f449f95fe4d6cff77491f0451235,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.375: INFO: Pod "webserver-deployment-dd94f59b7-5nlbl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5nlbl webserver-deployment-dd94f59b7- deployment-9190 7e9015d0-2b5b-4de0-86eb-5df455076364 815877 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.200" ], "mac": "82:41:a8:18:ed:91", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.200" ], "mac": "82:41:a8:18:ed:91", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397410 0xc001397411}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.200\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.200,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://37fbccee1d0a75b6109faf17e03e37764c0d3a66536e50763ca0c37e4a25d8ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.375: INFO: Pod "webserver-deployment-dd94f59b7-7x445" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7x445 webserver-deployment-dd94f59b7- deployment-9190 906fe184-446c-48ad-9110-7cd1e4f8c622 816241 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc0013975e0 0xc0013975e1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.376: INFO: Pod "webserver-deployment-dd94f59b7-8bfh7" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8bfh7 webserver-deployment-dd94f59b7- deployment-9190 09ec6bc6-33d6-45c9-93a0-11f5faa5b568 815914 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.197" ], "mac": "e6:3b:87:cf:c6:b6", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.197" ], "mac": "e6:3b:87:cf:c6:b6", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397710 0xc001397711}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.197,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9d776982f891ba58d83e1f3ad5944ef3df30cfbcb56b28d727a1ac7d2afc0333,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.376: INFO: Pod "webserver-deployment-dd94f59b7-964rh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-964rh webserver-deployment-dd94f59b7- deployment-9190 115023ff-26d9-4d14-aeca-e2759ddb0db0 816240 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc0013978d0 0xc0013978d1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.376: INFO: Pod "webserver-deployment-dd94f59b7-fcqrl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fcqrl webserver-deployment-dd94f59b7- deployment-9190 2e2942ed-8b4d-42a4-8952-5448c638b512 815928 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.18" ], "mac": "72:1b:11:42:98:d7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.18" ], "mac": "72:1b:11:42:98:d7", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397a00 0xc001397a01}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.18,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f29671a0bb50050a49b93f46093c2a43b6630bf5dcfc9887d0727c0f6f29eac9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.377: INFO: Pod "webserver-deployment-dd94f59b7-jdt4h" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jdt4h webserver-deployment-dd94f59b7- deployment-9190 facf184b-d413-4d4d-b0d1-7dd4f5c82869 816244 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397be0 0xc001397be1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]Conta
inerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.377: INFO: Pod "webserver-deployment-dd94f59b7-qjgg6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qjgg6 webserver-deployment-dd94f59b7- deployment-9190 cb98f5b9-e7b8-44af-907e-26085e3da9a9 816246 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397cf0 0xc001397cf1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-re
ady,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.377: INFO: Pod "webserver-deployment-dd94f59b7-rcql8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rcql8 webserver-deployment-dd94f59b7- deployment-9190 5d7ae95d-2ab3-44f0-a612-78f4acb2afb3 816243 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397e00 0xc001397e01}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,Termination
GracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.377: INFO: Pod "webserver-deployment-dd94f59b7-sc92t" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sc92t webserver-deployment-dd94f59b7- deployment-9190 078f20b5-0bf1-4626-ab13-733c899b6297 816242 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc001397f20 0xc001397f21}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]Conta
inerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.378: INFO: Pod "webserver-deployment-dd94f59b7-t2k46" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t2k46 webserver-deployment-dd94f59b7- deployment-9190 fcad2b94-2b25-479b-8e3e-288c070365a3 815858 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.199" ], "mac": "12:ab:b0:49:50:0a", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.199" ], "mac": "12:ab:b0:49:50:0a", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc0009d4bb0 0xc0009d4bb1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.199\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.199,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6fcae9e46387c07fbade5cf6471b145ec9c00517c64fb7d0b9b22449bfb87f35,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.378: INFO: Pod "webserver-deployment-dd94f59b7-t68hr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t68hr webserver-deployment-dd94f59b7- deployment-9190 367cda5f-fc65-4a3b-9a88-d9b4f5f5f8c6 816072 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.17" ], "mac": "fa:b5:de:a6:ac:af", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.17" ], "mac": "fa:b5:de:a6:ac:af", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc0009d51d0 0xc0009d51d1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.17,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://05e22cc8356eab1bdb6cca8600ad9c4139feade577dcdb9ca1826237e8a6765e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.378: INFO: Pod "webserver-deployment-dd94f59b7-tsn5s" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tsn5s webserver-deployment-dd94f59b7- deployment-9190 e26af273-4cb3-409b-ba43-fbc4b3ddb6dc 815918 0 2021-05-24 18:59:46 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.198" ], "mac": "16:f3:09:59:49:56", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.198" ], "mac": "16:f3:09:59:49:56", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc0009d5cd0 0xc0009d5cd1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 18:59:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 18:59:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.198\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.198,StartTime:2021-05-24 18:59:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 18:59:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b610f0adbbb063f39a720a199297faf6a7c9b725e304a13846761720228f7c99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 18:59:58.378: INFO: Pod "webserver-deployment-dd94f59b7-ttl5r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ttl5r webserver-deployment-dd94f59b7- deployment-9190 74f95e9a-9f43-4b66-ab1c-78e6258a08a3 816232 0 2021-05-24 18:59:58 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8 0xc00058abf0 0xc00058abf1}] [] [{kube-controller-manager Update v1 2021-05-24 18:59:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1a0f88a-5d9e-42c3-9a98-28bdcdd85cd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlwdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlwdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 18:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 18:59:58.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9190" for this suite. • [SLOW TEST:12.143 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":7,"skipped":125,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:29.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-2065 STEP: creating service affinity-nodeport-transition in namespace services-2065 STEP: creating replication controller affinity-nodeport-transition in namespace services-2065 I0524 18:59:29.898833 28 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2065, replica count: 3 I0524 18:59:32.949291 28 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 18:59:35.949571 28 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 18:59:35.958: INFO: Creating new exec pod May 24 18:59:40.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-2065 exec execpod-affinityzfqw8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 24 18:59:41.467: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 24 18:59:41.467: INFO: stdout: "" May 24 18:59:41.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-2065 exec execpod-affinityzfqw8 -- /bin/sh -x -c nc -zv -t -w 2 10.96.178.196 80' May 24 18:59:41.725: INFO: stderr: "+ nc -zv -t -w 2 10.96.178.196 80\nConnection to 10.96.178.196 80 port [tcp/http] succeeded!\n" May 24 18:59:41.725: INFO: stdout: "" May 
24 18:59:41.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-2065 exec execpod-affinityzfqw8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 32297' May 24 18:59:41.953: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 32297\nConnection to 172.18.0.7 32297 port [tcp/32297] succeeded!\n" May 24 18:59:41.953: INFO: stdout: "" May 24 18:59:41.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-2065 exec execpod-affinityzfqw8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 32297' May 24 18:59:42.190: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 32297\nConnection to 172.18.0.5 32297 port [tcp/32297] succeeded!\n" May 24 18:59:42.190: INFO: stdout: "" May 24 18:59:42.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-2065 exec execpod-affinityzfqw8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32297/ ; done' May 24 18:59:42.575: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n" May 24 18:59:42.576: INFO: stdout: "\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-spsfq\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-spsfq\naffinity-nodeport-transition-qg9t8\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-qg9t8\naffinity-nodeport-transition-qg9t8\naffinity-nodeport-transition-qg9t8\naffinity-nodeport-transition-qg9t8\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-spsfq\naffinity-nodeport-transition-qg9t8\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-spsfq\naffinity-nodeport-transition-pwmj7" May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-spsfq May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-spsfq May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-qg9t8 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-qg9t8 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-qg9t8 
May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-qg9t8 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-qg9t8 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-spsfq May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-qg9t8 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-spsfq May 24 18:59:42.576: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-2065 exec execpod-affinityzfqw8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32297/ ; done' May 24 18:59:42.890: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32297/\n" May 24 18:59:42.890: INFO: stdout: "\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7\naffinity-nodeport-transition-pwmj7" May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: 
affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Received response from host: affinity-nodeport-transition-pwmj7 May 24 18:59:42.890: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2065, will wait for the garbage collector to delete the pods May 24 18:59:42.956: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.063999ms May 24 18:59:43.657: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 700.275977ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:00.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2065" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:30.623 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:23.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi May 24 18:59:23.168: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 18:59:23.171: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 24 18:59:23.174: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 24 18:59:40.338: INFO: >>> kubeConfig: /root/.kube/config May 24 18:59:44.899: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:01.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-372" for this suite. • [SLOW TEST:38.047 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:41.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 24 18:59:41.682: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 18:59:41.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 18:59:43.704: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 18:59:45.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 18:59:47.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479581, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 18:59:50.715: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:01.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4394" for this suite. STEP: Destroying namespace "webhook-4394-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.889 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:01.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:02.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9739" for this suite. 
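The fetch/patch/delete/list cycle above can be reproduced by hand. A minimal sketch with kubectl; the namespace, event name, and field values are illustrative, and the set of supported event field selectors varies by API version:

# List events across all namespaces, then within a single namespace.
kubectl get events --all-namespaces
kubectl get events -n events-demo

# Field-selection filtering, analogous to the test's source and
# reportingController filters ("reason" is a widely supported selector).
kubectl get events --field-selector reason=Created

# Patch and delete a specific event via the events.k8s.io/v1 API
# (its "note" field corresponds to core v1's "message").
kubectl patch events.v1.events.k8s.io test-event -n events-demo --type merge -p '{"note":"updated"}'
kubectl delete events.v1.events.k8s.io test-event -n events-demo
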
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":6,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:52.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 24 18:59:52.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4802 eee79fe6-26d6-411b-9e98-7dac35ebfdfb 815897 0 2021-05-24 18:59:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 18:59:52.873: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4802 eee79fe6-26d6-411b-9e98-7dac35ebfdfb 815898 0 2021-05-24 18:59:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 18:59:52.873: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4802 eee79fe6-26d6-411b-9e98-7dac35ebfdfb 815899 0 2021-05-24 18:59:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 24 19:00:02.932: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4802 eee79fe6-26d6-411b-9e98-7dac35ebfdfb 816528 0 2021-05-24 18:59:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:00:02.933: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4802 
eee79fe6-26d6-411b-9e98-7dac35ebfdfb 816529 0 2021-05-24 18:59:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:00:02.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4802 eee79fe6-26d6-411b-9e98-7dac35ebfdfb 816530 0 2021-05-24 18:59:52 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-05-24 18:59:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:02.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4802" for this suite. • [SLOW TEST:10.265 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":7,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:03.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:03.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1429" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":8,"skipped":150,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:56.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-1b32ae81-6905-4968-87d6-8a6f941c9b4f STEP: Creating a pod to test consume secrets May 24 18:59:56.580: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02" in namespace "projected-6739" to be "Succeeded or Failed" May 24 18:59:56.582: INFO: Pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531897ms May 24 18:59:58.586: INFO: Pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006049036s May 24 19:00:00.593: INFO: Pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013511411s May 24 19:00:02.597: INFO: Pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017188389s May 24 19:00:04.600: INFO: Pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020298587s STEP: Saw pod success May 24 19:00:04.600: INFO: Pod "pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02" satisfied condition "Succeeded or Failed" May 24 19:00:04.603: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02 container projected-secret-volume-test: STEP: delete the pod May 24 19:00:04.613: INFO: Waiting for pod pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02 to disappear May 24 19:00:04.616: INFO: Pod pod-projected-secrets-57e278f4-660c-4727-9b2f-88b1b5de1c02 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:04.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6739" for this suite. 
• [SLOW TEST:8.080 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":82,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:55.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 18:59:55.967: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 18:59:57.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 18:59:59.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 24 19:00:01.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:00:03.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479595, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:00:06.985: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-635" for this suite. STEP: Destroying namespace "webhook-635-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.627 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":3,"skipped":76,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:56.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 24 18:59:56.797: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 18:59:56.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 18:59:58.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:00:00.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:00:02.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:00:04.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479596, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:00:07.833: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:07.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6435-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:08.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2281" for this suite. STEP: Destroying namespace "webhook-2281-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.644 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":6,"skipped":201,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:29.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7914" for this suite. 
• [SLOW TEST:39.584 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":94,"failed":0} SSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0} [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:01.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:01.216: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806" in namespace "security-context-test-6437" to be "Succeeded or Failed" May 24 19:00:01.219: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.947254ms May 24 19:00:03.225: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008382639s May 24 19:00:05.229: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012208704s May 24 19:00:07.232: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015740631s May 24 19:00:09.235: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019099528s May 24 19:00:11.239: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.022194734s May 24 19:00:11.239: INFO: Pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806" satisfied condition "Succeeded or Failed" May 24 19:00:11.244: INFO: Got logs for pod "busybox-privileged-false-afad3456-4aef-4efc-9a3d-52337c485806": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:11.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6437" for this suite. • [SLOW TEST:10.070 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:00.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:00.532: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 24 19:00:00.538: INFO: Pod name sample-pod: Found 0 pods out of 1 May 24 19:00:05.542: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 19:00:09.549: INFO: Creating deployment "test-rolling-update-deployment" May 24 19:00:09.555: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 24 19:00:09.561: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 24 19:00:11.569: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 24 19:00:11.572: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 May 24 19:00:11.582: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3958 723a465b-5f35-46ba-b84c-f656ee930da9 817082 1 2021-05-24 19:00:09 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-05-24 
19:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 19:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0040998e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-24 19:00:09 +0000 UTC,LastTransitionTime:2021-05-24 19:00:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2021-05-24 19:00:11 +0000 UTC,LastTransitionTime:2021-05-24 19:00:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 24 19:00:11.586: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-3958 6f296a8e-4e9d-434d-beeb-cab1872c9ddf 817072 1 2021-05-24 19:00:09 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 723a465b-5f35-46ba-b84c-f656ee930da9 0xc004099d77 0xc004099d78}] [] [{kube-controller-manager Update apps/v1 2021-05-24 19:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"723a465b-5f35-46ba-b84c-f656ee930da9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004099e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 24 19:00:11.586: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 24 19:00:11.586: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3958 98f728b2-7afd-4031-976b-e1795311c6bd 817081 2 2021-05-24 19:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 723a465b-5f35-46ba-b84c-f656ee930da9 0xc004099c67 0xc004099c68}] [] [{e2e.test Update apps/v1 2021-05-24 19:00:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 19:00:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"723a465b-5f35-46ba-b84c-f656ee930da9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004099d08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 19:00:11.590: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-l2vfq" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-l2vfq test-rolling-update-deployment-6b6bf9df46- deployment-3958 29576abf-caf4-4f20-b2c9-36697c4810df 817071 0 2021-05-24 19:00:09 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.221" ], "mac": "f2:bb:a3:4c:91:4b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.221" ], "mac": "f2:bb:a3:4c:91:4b", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 6f296a8e-4e9d-434d-beeb-cab1872c9ddf 0xc0046101f7 0xc0046101f8}] [] [{kube-controller-manager Update v1 2021-05-24 19:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f296a8e-4e9d-434d-beeb-cab1872c9ddf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 19:00:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 19:00:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.221\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qvl8n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qvl8n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qvl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]Po
dCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:00:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:00:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:00:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.221,StartTime:2021-05-24 19:00:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 19:00:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://a34c1d729347a8932adac7cfa7eec0daea86dddaa1df30ebb15995e356bcf32f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:11.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3958" for this suite. • [SLOW TEST:11.098 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:03.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 24 19:00:03.277: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:14.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7148" for this suite. 
• [SLOW TEST:10.993 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":158,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:02.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod May 24 19:00:12.679: INFO: Successfully updated pod "labelsupdatea512463a-dd95-4ab0-aafe-e6b31b2e49ab" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:14.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3205" for this suite. • [SLOW TEST:12.599 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":106,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:04.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:15.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9626" for this suite. 
• [SLOW TEST:11.090 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":5,"skipped":94,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:14.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars May 24 19:00:14.796: INFO: Waiting up to 5m0s for pod "downward-api-24c83191-85d1-424f-8911-12b0d53378c9" in namespace "downward-api-7883" to be "Succeeded or Failed" May 24 19:00:14.798: INFO: Pod "downward-api-24c83191-85d1-424f-8911-12b0d53378c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160067ms May 24 19:00:16.801: INFO: Pod "downward-api-24c83191-85d1-424f-8911-12b0d53378c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005285092s STEP: Saw pod success May 24 19:00:16.801: INFO: Pod "downward-api-24c83191-85d1-424f-8911-12b0d53378c9" satisfied condition "Succeeded or Failed" May 24 19:00:16.804: INFO: Trying to get logs from node leguer-worker2 pod downward-api-24c83191-85d1-424f-8911-12b0d53378c9 container dapi-container: STEP: delete the pod May 24 19:00:16.817: INFO: Waiting for pod downward-api-24c83191-85d1-424f-8911-12b0d53378c9 to disappear May 24 19:00:16.819: INFO: Pod downward-api-24c83191-85d1-424f-8911-12b0d53378c9 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:16.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7883" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":140,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:09.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating secret secrets-413/secret-test-f56a998f-0ffb-4a74-b03b-88e7ad68499b STEP: Creating a pod to test consume secrets May 24 19:00:09.398: INFO: Waiting up to 5m0s for pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329" in namespace "secrets-413" to be "Succeeded or Failed" May 24 19:00:09.401: INFO: Pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960208ms May 24 19:00:11.424: INFO: Pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026579408s May 24 19:00:13.428: INFO: Pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030300814s May 24 19:00:15.432: INFO: Pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034192406s May 24 19:00:17.435: INFO: Pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037141479s STEP: Saw pod success May 24 19:00:17.435: INFO: Pod "pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329" satisfied condition "Succeeded or Failed" May 24 19:00:17.437: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329 container env-test: STEP: delete the pod May 24 19:00:17.449: INFO: Waiting for pod pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329 to disappear May 24 19:00:17.450: INFO: Pod pod-configmaps-fbcf5a5c-52a6-488c-ba94-d3bc9f763329 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:17.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-413" for this suite. 
• [SLOW TEST:8.102 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:11.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating replication controller my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a May 24 19:00:11.330: INFO: Pod name my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a: Found 0 pods out of 1 May 24 19:00:16.336: INFO: Pod name my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a: Found 1 pods out of 1 May 24 19:00:16.336: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a" are running May 24 19:00:16.339: INFO: Pod "my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a-hzh59" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:00:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:00:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:00:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:00:11 +0000 UTC Reason: Message:}]) May 24 19:00:16.339: INFO: Trying to dial the pod May 24 19:00:21.349: INFO: Controller my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a: Got expected result from replica 1 [my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a-hzh59]: "my-hostname-basic-08e0f24e-9881-490e-811f-e8061953666a-hzh59", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:21.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6770" for this suite. 
• [SLOW TEST:10.066 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":3,"skipped":48,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:17.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller May 24 19:00:17.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 create -f -' May 24 19:00:17.871: INFO: stderr: "" May 24 19:00:17.871: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 19:00:17.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 24 19:00:18.040: INFO: stderr: "" May 24 19:00:18.040: INFO: stdout: "update-demo-nautilus-bqdkg update-demo-nautilus-hwfxv " May 24 19:00:18.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods update-demo-nautilus-bqdkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 24 19:00:18.153: INFO: stderr: "" May 24 19:00:18.153: INFO: stdout: "" May 24 19:00:18.153: INFO: update-demo-nautilus-bqdkg is created but not running May 24 19:00:23.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 24 19:00:23.280: INFO: stderr: "" May 24 19:00:23.280: INFO: stdout: "update-demo-nautilus-bqdkg update-demo-nautilus-hwfxv " May 24 19:00:23.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods update-demo-nautilus-bqdkg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 24 19:00:23.397: INFO: stderr: "" May 24 19:00:23.397: INFO: stdout: "true" May 24 19:00:23.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods update-demo-nautilus-bqdkg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 24 19:00:23.511: INFO: stderr: "" May 24 19:00:23.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 19:00:23.511: INFO: validating pod update-demo-nautilus-bqdkg May 24 19:00:23.518: INFO: got data: { "image": "nautilus.jpg" } May 24 19:00:23.518: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 19:00:23.518: INFO: update-demo-nautilus-bqdkg is verified up and running May 24 19:00:23.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods update-demo-nautilus-hwfxv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 24 19:00:23.631: INFO: stderr: "" May 24 19:00:23.631: INFO: stdout: "true" May 24 19:00:23.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods update-demo-nautilus-hwfxv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 24 19:00:23.747: INFO: stderr: "" May 24 19:00:23.747: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 19:00:23.747: INFO: validating pod update-demo-nautilus-hwfxv May 24 19:00:23.751: INFO: got data: { "image": "nautilus.jpg" } May 24 19:00:23.751: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 19:00:23.751: INFO: update-demo-nautilus-hwfxv is verified up and running STEP: using delete to clean up resources May 24 19:00:23.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 delete --grace-period=0 --force -f -' May 24 19:00:23.871: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 19:00:23.871: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 24 19:00:23.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get rc,svc -l name=update-demo --no-headers' May 24 19:00:23.992: INFO: stderr: "No resources found in kubectl-2712 namespace.\n" May 24 19:00:23.992: INFO: stdout: "" May 24 19:00:23.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2712 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 19:00:24.119: INFO: stderr: "" May 24 19:00:24.119: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:24.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2712" for this suite. • [SLOW TEST:6.633 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":5,"skipped":132,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:24.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:24.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4599" for this suite. 
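------------------------------
The patch-a-secret spec above is a short API round trip: create, patch, find by label, delete by selector. Roughly the same sequence by hand (names and the label key are hypothetical):

kubectl create secret generic patch-secret --from-literal=key=value
# patch the payload and attach a label so the secret can be found and deleted by selector
kubectl patch secret patch-secret -p '{"data":{"key":"dmFsdWUy"},"metadata":{"labels":{"testsecret":"true"}}}'   # dmFsdWUy is base64 for "value2"
kubectl get secrets --all-namespaces -l testsecret=true
kubectl delete secret -l testsecret=true
------------------------------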
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":6,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:15.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service nodeport-test with type=NodePort in namespace services-1758 STEP: creating replication controller nodeport-test in namespace services-1758 I0524 19:00:16.136898 27 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1758, replica count: 2 I0524 19:00:19.187434 27 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:00:22.187750 27 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:00:22.187: INFO: Creating new exec pod May 24 19:00:25.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1758 exec execpodlv662 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 24 19:00:25.434: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" May 24 19:00:25.434: INFO: stdout: "" May 24 19:00:25.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1758 exec execpodlv662 -- /bin/sh -x -c nc -zv -t -w 2 10.96.245.118 80' May 24 19:00:25.669: INFO: stderr: "+ nc -zv -t -w 2 10.96.245.118 80\nConnection to 10.96.245.118 80 port [tcp/http] succeeded!\n" May 24 19:00:25.669: INFO: stdout: "" May 24 19:00:25.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1758 exec execpodlv662 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 30354' May 24 19:00:25.934: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 30354\nConnection to 172.18.0.7 30354 port [tcp/30354] succeeded!\n" May 24 19:00:25.934: INFO: stdout: "" May 24 19:00:25.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1758 exec execpodlv662 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 30354' May 24 19:00:26.165: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 30354\nConnection to 172.18.0.5 30354 port [tcp/30354] succeeded!\n" May 24 19:00:26.166: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:26.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1758" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:10.422 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":6,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:58.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4019 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4019 STEP: creating replication controller externalsvc in namespace services-4019 I0524 18:59:58.443785 24 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4019, replica count: 2 I0524 19:00:01.494235 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:00:04.494563 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:00:07.494990 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:00:10.495312 24 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 24 19:00:10.510: INFO: Creating new exec pod May 24 19:00:16.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4019 exec execpodcs7f5 -- /bin/sh -x -c nslookup clusterip-service.services-4019.svc.cluster.local' May 24 19:00:16.771: INFO: stderr: "+ nslookup clusterip-service.services-4019.svc.cluster.local\n" May 24 19:00:16.771: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4019.svc.cluster.local\tcanonical name = externalsvc.services-4019.svc.cluster.local.\nName:\texternalsvc.services-4019.svc.cluster.local\nAddress: 10.96.178.219\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4019, will wait for the garbage collector to delete the pods May 24 
19:00:16.832: INFO: Deleting ReplicationController externalsvc took: 7.290631ms May 24 19:00:17.533: INFO: Terminating ReplicationController externalsvc pods took: 700.285597ms May 24 19:00:27.943: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:27.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4019" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:29.556 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":8,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:26.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:00:26.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426" in namespace "downward-api-6297" to be "Succeeded or Failed" May 24 19:00:26.341: INFO: Pod "downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503027ms May 24 19:00:28.346: INFO: Pod "downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007253221s STEP: Saw pod success May 24 19:00:28.346: INFO: Pod "downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426" satisfied condition "Succeeded or Failed" May 24 19:00:28.349: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426 container client-container: STEP: delete the pod May 24 19:00:28.362: INFO: Waiting for pod downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426 to disappear May 24 19:00:28.365: INFO: Pod downwardapi-volume-a23411b1-0d73-43f4-b86b-81cf8bf5e426 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:28.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6297" for this suite. 
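------------------------------
The downward API volume exercised above projects pod metadata into a file inside the container. A minimal sketch (pod name, mount path, and image are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the file's content is the pod's own name
EOF
kubectl logs downwardapi-volume-demo   # prints: downwardapi-volume-demo
------------------------------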
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":184,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:14.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:14.277: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 24 19:00:19.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 create -f -' May 24 19:00:20.170: INFO: stderr: "" May 24 19:00:20.170: INFO: stdout: "e2e-test-crd-publish-openapi-2808-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 24 19:00:20.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 delete e2e-test-crd-publish-openapi-2808-crds test-foo' May 24 19:00:20.284: INFO: stderr: "" May 24 19:00:20.284: INFO: stdout: "e2e-test-crd-publish-openapi-2808-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 24 19:00:20.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 apply -f -' May 24 19:00:20.545: INFO: stderr: "" May 24 19:00:20.545: INFO: stdout: "e2e-test-crd-publish-openapi-2808-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 24 19:00:20.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 delete e2e-test-crd-publish-openapi-2808-crds test-foo' May 24 19:00:20.655: INFO: stderr: "" May 24 19:00:20.656: INFO: stdout: "e2e-test-crd-publish-openapi-2808-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 24 19:00:20.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 create -f -' May 24 19:00:20.918: INFO: rc: 1 May 24 19:00:20.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 apply -f -' May 24 19:00:21.249: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 24 19:00:21.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 
create -f -' May 24 19:00:21.493: INFO: rc: 1 May 24 19:00:21.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 --namespace=crd-publish-openapi-623 apply -f -' May 24 19:00:21.751: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 24 19:00:21.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 explain e2e-test-crd-publish-openapi-2808-crds' May 24 19:00:21.999: INFO: stderr: "" May 24 19:00:21.999: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2808-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 24 19:00:21.999: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 explain e2e-test-crd-publish-openapi-2808-crds.metadata' May 24 19:00:22.255: INFO: stderr: "" May 24 19:00:22.255: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2808-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. 
Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 24 19:00:22.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 explain e2e-test-crd-publish-openapi-2808-crds.spec' May 24 19:00:22.518: INFO: stderr: "" May 24 19:00:22.518: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2808-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 24 19:00:22.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 explain e2e-test-crd-publish-openapi-2808-crds.spec.bars' May 24 19:00:22.775: INFO: stderr: "" May 24 19:00:22.776: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2808-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 24 19:00:22.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-623 explain e2e-test-crd-publish-openapi-2808-crds.spec.bars2' May 24 19:00:23.039: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:28.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-623" for this suite. 
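------------------------------
Reading the steps above: the schema the apiserver publishes via OpenAPI is what lets kubectl both validate CRs client-side and answer explain queries. Reusing the generated names from this run (a fresh cluster would generate different ones), the rejected create looks roughly like:

kubectl explain e2e-test-crd-publish-openapi-2808-crds.spec.bars
kubectl apply -f - <<EOF   # exits non-zero (rc: 1 above): the extra property is not in the schema
apiVersion: crd-publish-openapi-test-foo.example.com/v1
kind: E2e-test-crd-publish-openapi-2808-crd
metadata:
  name: test-foo
spec:
  bars:
  - name: test-bar
    unknownField: rejected   # hypothetical property, disallowed by the published schema
EOF
------------------------------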
• [SLOW TEST:14.173 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":10,"skipped":166,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:07.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-secret-6tx7 STEP: Creating a pod to test atomic-volume-subpath May 24 19:00:07.129: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-6tx7" in namespace "subpath-9076" to be "Succeeded or Failed" May 24 19:00:07.132: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.571431ms May 24 19:00:09.135: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006049722s May 24 19:00:11.140: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010234891s May 24 19:00:13.143: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 6.014075355s May 24 19:00:15.147: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 8.017949286s May 24 19:00:17.151: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 10.021517706s May 24 19:00:19.227: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 12.097233042s May 24 19:00:21.230: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 14.101039671s May 24 19:00:23.234: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 16.104580722s May 24 19:00:25.237: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 18.107677451s May 24 19:00:27.241: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 20.111762519s May 24 19:00:29.244: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Running", Reason="", readiness=true. Elapsed: 22.114451932s May 24 19:00:31.247: INFO: Pod "pod-subpath-test-secret-6tx7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.117878167s STEP: Saw pod success May 24 19:00:31.247: INFO: Pod "pod-subpath-test-secret-6tx7" satisfied condition "Succeeded or Failed" May 24 19:00:31.250: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-secret-6tx7 container test-container-subpath-secret-6tx7: STEP: delete the pod May 24 19:00:31.264: INFO: Waiting for pod pod-subpath-test-secret-6tx7 to disappear May 24 19:00:31.266: INFO: Pod pod-subpath-test-secret-6tx7 no longer exists STEP: Deleting pod pod-subpath-test-secret-6tx7 May 24 19:00:31.266: INFO: Deleting pod "pod-subpath-test-secret-6tx7" in namespace "subpath-9076" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:31.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9076" for this suite. • [SLOW TEST:24.186 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:27.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 19:00:28.025: INFO: Waiting up to 5m0s for pod "pod-99a761e3-d1b1-440c-9745-721dbe5ea072" in namespace "emptydir-399" to be "Succeeded or Failed" May 24 19:00:28.027: INFO: Pod "pod-99a761e3-d1b1-440c-9745-721dbe5ea072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.355357ms May 24 19:00:30.032: INFO: Pod "pod-99a761e3-d1b1-440c-9745-721dbe5ea072": Phase="Running", Reason="", readiness=true. Elapsed: 2.006619738s May 24 19:00:32.036: INFO: Pod "pod-99a761e3-d1b1-440c-9745-721dbe5ea072": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010807937s STEP: Saw pod success May 24 19:00:32.036: INFO: Pod "pod-99a761e3-d1b1-440c-9745-721dbe5ea072" satisfied condition "Succeeded or Failed" May 24 19:00:32.039: INFO: Trying to get logs from node leguer-worker pod pod-99a761e3-d1b1-440c-9745-721dbe5ea072 container test-container: STEP: delete the pod May 24 19:00:32.055: INFO: Waiting for pod pod-99a761e3-d1b1-440c-9745-721dbe5ea072 to disappear May 24 19:00:32.058: INFO: Pod pod-99a761e3-d1b1-440c-9745-721dbe5ea072 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:32.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-399" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:28.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6613.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6613.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6613.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6613.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6613.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6613.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:00:32.526: INFO: DNS probes using dns-6613/dns-test-39ac9e05-e283-4ba3-b2a6-64247200c265 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:32.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6613" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":175,"failed":0} S ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:11.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 24 19:00:11.638: INFO: >>> kubeConfig: /root/.kube/config May 24 19:00:15.552: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:32.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-594" for this suite. 
• [SLOW TEST:20.972 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":5,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:28.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:00:29.322: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:00:32.338: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:32.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4131-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3911" for this suite. STEP: Destroying namespace "webhook-3911-markers" for this suite. 
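------------------------------
The moving part in the stored-version spec above is flipping which CRD version the apiserver persists, then writing the resource again so the webhook sees both versions. Sketched against a hypothetical two-version CRD named foos.example.com (the positions in .spec.versions are assumptions):

kubectl get crd foos.example.com -o jsonpath='{.status.storedVersions}'
# make v2 the storage version, as the "Patching ... to set v2 as storage" step does above
kubectl patch crd foos.example.com --type=json -p '[
  {"op": "replace", "path": "/spec/versions/0/storage", "value": false},
  {"op": "replace", "path": "/spec/versions/1/storage", "value": true}
]'
------------------------------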
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.301 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:31.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC May 24 19:00:31.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6299 create -f -' May 24 19:00:31.703: INFO: stderr: "" May 24 19:00:31.703: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 24 19:00:32.707: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:00:32.707: INFO: Found 0 / 1 May 24 19:00:33.706: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:00:33.706: INFO: Found 0 / 1 May 24 19:00:34.707: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:00:34.707: INFO: Found 1 / 1 May 24 19:00:34.707: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 24 19:00:34.711: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:00:34.711: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 19:00:34.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6299 patch pod agnhost-primary-p64nd -p {"metadata":{"annotations":{"x":"y"}}}' May 24 19:00:34.840: INFO: stderr: "" May 24 19:00:34.841: INFO: stdout: "pod/agnhost-primary-p64nd patched\n" STEP: checking annotations May 24 19:00:34.844: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:00:34.844: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:34.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6299" for this suite. 
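------------------------------
The log above shows the patch command but not the readback. One way to confirm the annotation landed (shown with this run's generated pod name; any pod name works):

kubectl patch pod agnhost-primary-p64nd -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod agnhost-primary-p64nd -o jsonpath='{.metadata.annotations.x}'   # prints: y
------------------------------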
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":5,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:32.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-0d8c14d6-1d45-4019-8177-54563e579da2 STEP: Creating a pod to test consume secrets May 24 19:00:32.178: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928" in namespace "projected-2420" to be "Succeeded or Failed" May 24 19:00:32.181: INFO: Pod "pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.848167ms May 24 19:00:34.222: INFO: Pod "pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043618825s May 24 19:00:36.225: INFO: Pod "pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047439196s STEP: Saw pod success May 24 19:00:36.226: INFO: Pod "pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928" satisfied condition "Succeeded or Failed" May 24 19:00:36.228: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928 container projected-secret-volume-test: STEP: delete the pod May 24 19:00:36.242: INFO: Waiting for pod pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928 to disappear May 24 19:00:36.244: INFO: Pod pod-projected-secrets-5e1b26d9-41f1-4e32-94ef-7fc8f15d6928 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:36.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2420" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":186,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:36.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:36.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4315" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • ------------------------------ {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":11,"skipped":188,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:32.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service multi-endpoint-test in namespace services-1384 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1384 to expose endpoints map[] May 24 19:00:32.577: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found May 24 19:00:33.583: INFO: successfully validated that service multi-endpoint-test in namespace services-1384 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-1384 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1384 to expose endpoints map[pod1:[100]] May 24 19:00:36.604: INFO: successfully validated that service multi-endpoint-test in namespace services-1384 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-1384 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1384 to expose endpoints map[pod1:[100] pod2:[101]] May 24 19:00:40.626: INFO: successfully validated that service multi-endpoint-test in namespace services-1384 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-1384 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1384 to expose 
endpoints map[pod2:[101]] May 24 19:00:40.649: INFO: successfully validated that service multi-endpoint-test in namespace services-1384 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-1384 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1384 to expose endpoints map[] May 24 19:00:40.664: INFO: successfully validated that service multi-endpoint-test in namespace services-1384 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:40.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1384" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:8.147 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":12,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:32.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:43.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1000" for this suite. • [SLOW TEST:11.074 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":6,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:43.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Starting the proxy May 24 19:00:43.821: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-922 proxy --unix-socket=/tmp/kubectl-proxy-unix685547313/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:43.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-922" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":7,"skipped":137,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:43.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 24 19:00:43.986: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 24 19:00:43.991: INFO: starting watch STEP: patching STEP: updating May 24 19:00:44.005: INFO: waiting for watch events with expected annotations May 24 19:00:44.005: INFO: missing expected annotations, waiting: map[string]string(nil) May 24 19:00:44.005: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:44.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-3670" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":141,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:40.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-e964cd5f-38d1-4625-be1c-5c5065bdcd5c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:44.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4456" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":196,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:36.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 19:00:36.897: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 19:00:38.908: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:00:40.912: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479636, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:00:43.920: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:43.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:45.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2871" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.794 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":12,"skipped":191,"failed":0} S ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:45.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:00:45.132: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:47.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5127" for this suite. 
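------------------------------
The conversion-webhook test that finished above registers a multi-version CRD whose conversion.strategy is Webhook, pointing at the Service it paired with the webhook Deployment, then creates a v1 custom resource and reads it back at v2. The CRD side of that wiring looks roughly like this; the group and kind are placeholders, the service name and namespace are taken from the log, and the handler path is an assumption:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names: {plural: widgets, singular: widget, kind: Widget}
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  - name: v2
    served: true
    storage: false
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        # a caBundle for the webhook's serving cert is required in practice; elided here
        service:
          name: e2e-test-crd-conversion-webhook
          namespace: crd-webhook-2871
          path: /crdconvert        # assumed handler path
EOF

The Pods test just above drives the pod exec subresource over a WebSocket upgrade rather than SPDY. One hand-rolled way to reach the same endpoint, assuming websocat is installed, kubectl proxy handles the upgrade, and a pod named main-pod exists in default; v4.channel.k8s.io is the framed-stream subprotocol the API serves:

kubectl proxy --port=8001 &
websocat --protocol v4.channel.k8s.io \
  'ws://127.0.0.1:8001/api/v1/namespaces/default/pods/main-pod/exec?command=echo&command=hello&stdout=true&stderr=true'
------------------------------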
• ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":192,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 18:59:45.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-4695 STEP: creating service affinity-clusterip-transition in namespace services-4695 STEP: creating replication controller affinity-clusterip-transition in namespace services-4695 I0524 18:59:45.994067 18 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-4695, replica count: 3 I0524 18:59:49.044587 18 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 18:59:52.044913 18 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 18:59:52.050: INFO: Creating new exec pod May 24 19:00:01.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4695 exec execpod-affinityk9m77 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 24 19:00:01.267: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" May 24 19:00:01.267: INFO: stdout: "" May 24 19:00:01.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4695 exec execpod-affinityk9m77 -- /bin/sh -x -c nc -zv -t -w 2 10.96.77.19 80' May 24 19:00:01.486: INFO: stderr: "+ nc -zv -t -w 2 10.96.77.19 80\nConnection to 10.96.77.19 80 port [tcp/http] succeeded!\n" May 24 19:00:01.486: INFO: stdout: "" May 24 19:00:01.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4695 exec execpod-affinityk9m77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.77.19:80/ ; done' May 24 19:00:01.870: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n" May 24 19:00:01.870: INFO: stdout: "\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-k8v9w" May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:01.870: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:01.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4695 exec execpod-affinityk9m77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.77.19:80/ ; done' May 24 19:00:02.207: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n" May 24 19:00:02.207: INFO: stdout: "\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-k8v9w\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-49p2c\naffinity-clusterip-transition-pj5m7" May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-k8v9w May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-49p2c May 24 19:00:02.207: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-4695 exec execpod-affinityk9m77 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.77.19:80/ ; done' May 24 19:00:32.618: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.77.19:80/\n" May 24 19:00:32.618: INFO: stdout: "\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7\naffinity-clusterip-transition-pj5m7" May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Received response from host: affinity-clusterip-transition-pj5m7 May 24 19:00:32.618: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-4695, will wait for the garbage collector to delete the pods May 24 19:00:32.684: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.012471ms May 24 19:00:32.784: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.322114ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:47.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4695" for this suite. 
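------------------------------
The session-affinity test wrapping up above flips sessionAffinity on a live ClusterIP Service and checks that the traffic pattern follows: the first two curl batches in the log spread across all three backends, and after the switch every one of the sixteen responses comes from affinity-clusterip-transition-pj5m7. The switch itself is a one-field patch (namespace and service name taken from the log):

kubectl -n services-4695 patch service affinity-clusterip-transition \
  --type=merge -p '{"spec":{"sessionAffinity":"ClientIP"}}'
# and back again:
kubectl -n services-4695 patch service affinity-clusterip-transition \
  --type=merge -p '{"spec":{"sessionAffinity":"None"}}'
------------------------------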
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:62.056 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":93,"failed":0} S ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":8,"skipped":187,"failed":0} [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:33.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:49.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6073" for this suite. • [SLOW TEST:16.112 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":9,"skipped":187,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:44.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod May 24 19:00:48.670: INFO: Successfully updated pod "annotationupdate0af0392d-f3a4-4a87-b9ef-0840f95d8e9c" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:50.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6264" for this suite. • [SLOW TEST:6.595 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":157,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:48.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:51.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2360" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":4,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:49.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:00:49.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300" in namespace "downward-api-8207" to be "Succeeded or Failed" May 24 19:00:49.885: INFO: Pod "downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257047ms May 24 19:00:51.889: INFO: Pod "downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006165681s May 24 19:00:53.893: INFO: Pod "downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009993592s STEP: Saw pod success May 24 19:00:53.893: INFO: Pod "downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300" satisfied condition "Succeeded or Failed" May 24 19:00:53.896: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300 container client-container: STEP: delete the pod May 24 19:00:53.909: INFO: Waiting for pod downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300 to disappear May 24 19:00:53.912: INFO: Pod downwardapi-volume-49b9bb92-1760-4198-b580-f0fea088d300 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:53.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8207" for this suite. 
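------------------------------
The Downward API volume test above requests limits.cpu for a container that declares no CPU limit, so the kubelet substitutes the node's allocatable CPU; the pod simply cats the file and exits, which is why it goes Pending -> Succeeded in a few seconds. A minimal sketch of the volume wiring (names are illustrative; divisor 1m reports the value in millicores):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu    # falls back to node allocatable when the limit is unset
          divisor: 1m
EOF
------------------------------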
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":207,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:16.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4139.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4139.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:00:20.984: INFO: DNS probes using dns-test-bcf14d4b-268f-45e4-a781-3bacdd3e72fe succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4139.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4139.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:00:23.019: INFO: File wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains '' instead of 'bar.example.com.' May 24 19:00:23.023: INFO: File jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:23.023: INFO: Lookups using dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 failed for: [wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local] May 24 19:00:28.028: INFO: File wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:28.031: INFO: File jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 24 19:00:28.031: INFO: Lookups using dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 failed for: [wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local] May 24 19:00:33.028: INFO: File wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:33.032: INFO: File jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:33.032: INFO: Lookups using dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 failed for: [wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local] May 24 19:00:38.028: INFO: File wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:38.033: INFO: File jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:38.033: INFO: Lookups using dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 failed for: [wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local] May 24 19:00:43.029: INFO: File wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:43.033: INFO: File jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:43.033: INFO: Lookups using dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 failed for: [wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local] May 24 19:00:48.031: INFO: File wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 19:00:48.035: INFO: File jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local from pod dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 24 19:00:48.035: INFO: Lookups using dns-4139/dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 failed for: [wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local] May 24 19:00:53.030: INFO: DNS probes using dns-test-164e31ea-5c07-4d6a-bce0-5243549e6f00 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4139.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4139.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4139.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4139.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:00:55.076: INFO: DNS probes using dns-test-a183c86b-c02d-4f8d-a979-22de577a87d3 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:55.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4139" for this suite. • [SLOW TEST:38.175 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":9,"skipped":206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:55.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:00:55.208: INFO: Waiting up to 5m0s for pod "downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1" in namespace "projected-4886" to be "Succeeded or Failed" May 24 19:00:55.211: INFO: Pod "downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977331ms May 24 19:00:57.323: INFO: Pod "downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.115684593s STEP: Saw pod success May 24 19:00:57.323: INFO: Pod "downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1" satisfied condition "Succeeded or Failed" May 24 19:00:57.326: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1 container client-container: STEP: delete the pod May 24 19:00:57.527: INFO: Waiting for pod downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1 to disappear May 24 19:00:57.530: INFO: Pod downwardapi-volume-387be033-7aef-4be6-b490-b36090568ca1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:57.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4886" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:08.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1110 May 24 19:00:15.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 24 19:00:15.250: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 24 19:00:15.250: INFO: stdout: "iptables" May 24 19:00:15.250: INFO: proxyMode: iptables May 24 19:00:15.258: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 19:00:15.261: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1110 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1110 I0524 19:00:15.278364 20 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1110, replica count: 3 I0524 19:00:18.328874 20 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:00:21.329158 20 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:00:21.339: INFO: Creating new exec pod May 24 19:00:24.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c nc -zv -t -w 2 
affinity-nodeport-timeout 80' May 24 19:00:24.585: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 24 19:00:24.585: INFO: stdout: "" May 24 19:00:24.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c nc -zv -t -w 2 10.96.153.117 80' May 24 19:00:24.820: INFO: stderr: "+ nc -zv -t -w 2 10.96.153.117 80\nConnection to 10.96.153.117 80 port [tcp/http] succeeded!\n" May 24 19:00:24.820: INFO: stdout: "" May 24 19:00:24.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 30206' May 24 19:00:25.008: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 30206\nConnection to 172.18.0.7 30206 port [tcp/30206] succeeded!\n" May 24 19:00:25.008: INFO: stdout: "" May 24 19:00:25.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 30206' May 24 19:00:25.197: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 30206\nConnection to 172.18.0.5 30206 port [tcp/30206] succeeded!\n" May 24 19:00:25.198: INFO: stdout: "" May 24 19:00:25.198: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:30206/ ; done' May 24 19:00:25.541: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n" May 24 19:00:25.541: INFO: stdout: "\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt\naffinity-nodeport-timeout-f8wqt" May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: 
affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Received response from host: affinity-nodeport-timeout-f8wqt May 24 19:00:25.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:30206/' May 24 19:00:25.813: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n" May 24 19:00:25.813: INFO: stdout: "affinity-nodeport-timeout-f8wqt" May 24 19:00:45.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1110 exec execpod-affinityr98t7 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:30206/' May 24 19:00:46.036: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:30206/\n" May 24 19:00:46.037: INFO: stdout: "affinity-nodeport-timeout-g9cc7" May 24 19:00:46.037: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1110, will wait for the garbage collector to delete the pods May 24 19:00:46.106: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.535238ms May 24 19:00:46.206: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.313205ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:58.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1110" for this suite. 
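------------------------------
The timeout variant above pins a client to one backend via ClientIP affinity, then sleeps past the configured expiry (the deliberate ~20s pause between 19:00:25 and 19:00:45 in the log) and expects a different pod to answer, which is what happens: f8wqt before the pause, g9cc7 after. The Service stanza under test looks roughly like this; the selector and the exact timeout are assumptions, though the conformance test uses a similarly short value:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-nodeport-timeout
spec:
  type: NodePort
  selector:
    name: affinity-nodeport-timeout
  ports:
  - port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10   # sticky entries expire after 10s of inactivity
EOF
------------------------------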
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:49.065 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":216,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:34.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-k5bz STEP: Creating a pod to test atomic-volume-subpath May 24 19:00:34.925: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k5bz" in namespace "subpath-471" to be "Succeeded or Failed" May 24 19:00:34.928: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389479ms May 24 19:00:36.931: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005803423s May 24 19:00:38.935: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 4.009509627s May 24 19:00:40.939: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 6.013399722s May 24 19:00:42.942: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 8.016375706s May 24 19:00:44.945: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 10.019459719s May 24 19:00:46.948: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 12.022897885s May 24 19:00:48.951: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 14.025967664s May 24 19:00:50.955: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 16.029984497s May 24 19:00:52.959: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 18.033686441s May 24 19:00:54.964: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 20.03873282s May 24 19:00:57.027: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Running", Reason="", readiness=true. Elapsed: 22.102168476s May 24 19:00:59.030: INFO: Pod "pod-subpath-test-configmap-k5bz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.104839322s STEP: Saw pod success May 24 19:00:59.030: INFO: Pod "pod-subpath-test-configmap-k5bz" satisfied condition "Succeeded or Failed" May 24 19:00:59.033: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-k5bz container test-container-subpath-configmap-k5bz: STEP: delete the pod May 24 19:00:59.045: INFO: Waiting for pod pod-subpath-test-configmap-k5bz to disappear May 24 19:00:59.047: INFO: Pod pod-subpath-test-configmap-k5bz no longer exists STEP: Deleting pod pod-subpath-test-configmap-k5bz May 24 19:00:59.047: INFO: Deleting pod "pod-subpath-test-configmap-k5bz" in namespace "subpath-471" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:59.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-471" for this suite. • [SLOW TEST:24.171 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":140,"failed":0} SSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:57.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 19:00:59.664: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:00:59.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3451" for this suite. 
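The "Expected: &{OK} to match" line is the heart of this case: the container writes OK to its termination-message file and exits zero, and the kubelet surfaces the file's contents as the termination message; with TerminationMessagePolicy FallbackToLogsOnError the log tail is used only when the file is empty and the container failed. A pod reproducing the passing path might look like this (sketch; the name and image are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log     # the default path, spelled out for clarity
      terminationMessagePolicy: FallbackToLogsOnError
  EOF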
• ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":298,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:58.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 19:00:58.096: INFO: Waiting up to 5m0s for pod "pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09" in namespace "emptydir-8717" to be "Succeeded or Failed" May 24 19:00:58.099: INFO: Pod "pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889535ms May 24 19:01:00.103: INFO: Pod "pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00692305s STEP: Saw pod success May 24 19:01:00.103: INFO: Pod "pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09" satisfied condition "Succeeded or Failed" May 24 19:01:00.106: INFO: Trying to get logs from node leguer-worker2 pod pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09 container test-container: STEP: delete the pod May 24 19:01:00.124: INFO: Waiting for pod pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09 to disappear May 24 19:01:00.127: INFO: Pod pod-f281e3f2-0fe1-4f0c-8853-39c61746ab09 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:00.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8717" for this suite. 
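This entry in the emptyDir matrix covers (non-root,0644,default): a default-medium emptyDir written by a non-root user with 0644 file permissions. A comparable pod, assuming a plain busybox image rather than the suite's own mount-test container:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["/bin/sh", "-c", "echo hi > /mnt/test/f && chmod 0644 /mnt/test/f && stat -c '%a %u' /mnt/test/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/test
    volumes:
    - name: scratch
      emptyDir: {}                   # default medium, i.e. node-local disk
  EOF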
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":231,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:59.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:00:59.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0" in namespace "downward-api-6019" to be "Succeeded or Failed" May 24 19:00:59.098: INFO: Pod "downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664182ms May 24 19:01:01.122: INFO: Pod "downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027269953s STEP: Saw pod success May 24 19:01:01.122: INFO: Pod "downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0" satisfied condition "Succeeded or Failed" May 24 19:01:01.125: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0 container client-container: STEP: delete the pod May 24 19:01:01.138: INFO: Waiting for pod downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0 to disappear May 24 19:01:01.141: INFO: Pod downwardapi-volume-632e8a0e-6957-45eb-98c2-2e7fdf9e74a0 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:01.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6019" for this suite. 
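The assertion here is that a downward-API resourceFieldRef for limits.memory falls back to the node's allocatable memory when the container declares no memory limit. The relevant volume wiring, sketched with assumed names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]   # no resources.limits set
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory    # limit unset, so node allocatable is reported
  EOF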
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":144,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:59.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:00:59.738: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff" in namespace "downward-api-3969" to be "Succeeded or Failed" May 24 19:00:59.741: INFO: Pod "downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.67266ms May 24 19:01:01.744: INFO: Pod "downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006325955s STEP: Saw pod success May 24 19:01:01.744: INFO: Pod "downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff" satisfied condition "Succeeded or Failed" May 24 19:01:01.749: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff container client-container: STEP: delete the pod May 24 19:01:01.763: INFO: Waiting for pod downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff to disappear May 24 19:01:01.766: INFO: Pod downwardapi-volume-b1b1389c-11d8-47e4-a0ca-96284615f7ff no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:01.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3969" for this suite. 
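defaultMode is applied to every file a downward-API volume projects; the test mounts such a volume and checks the resulting permissions. Sketch (0400 is an arbitrary illustrative mode):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-defaultmode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/podinfo/labels"]   # -L follows the projected symlink
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400              # mode applied to each projected file
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF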
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":307,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:01.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name projected-secret-test-df460d58-e346-4995-ba6b-391f7eed7eef STEP: Creating a pod to test consume secrets May 24 19:01:01.816: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e" in namespace "projected-3128" to be "Succeeded or Failed" May 24 19:01:01.819: INFO: Pod "pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.031616ms May 24 19:01:03.823: INFO: Pod "pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007321626s STEP: Saw pod success May 24 19:01:03.823: INFO: Pod "pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e" satisfied condition "Succeeded or Failed" May 24 19:01:03.826: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e container secret-volume-test: STEP: delete the pod May 24 19:01:03.841: INFO: Waiting for pod pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e to disappear May 24 19:01:03.846: INFO: Pod pod-projected-secrets-f07d6f8a-ce41-48ea-95c3-de609bf9124e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:03.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3128" for this suite. 
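The point of this case is that a single secret can back several projected volumes in one pod, each mounted at its own path. Sketch with an assumed secret name:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["/bin/sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
      volumeMounts:
      - name: secret-vol-1
        mountPath: /etc/secret-1
      - name: secret-vol-2
        mountPath: /etc/secret-2
    volumes:
    - name: secret-vol-1
      projected:
        sources:
        - secret:
            name: demo-secret
    - name: secret-vol-2
      projected:
        sources:
        - secret:
            name: demo-secret
  EOF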
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":309,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:00.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes May 24 19:01:03.232: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:04.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5198" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:01.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 19:01:03.715: INFO: Successfully updated pod "pod-update-activedeadlineseconds-53a04cf3-c558-41cc-9a87-b0812229e8a4" May 24 19:01:03.715: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-53a04cf3-c558-41cc-9a87-b0812229e8a4" in namespace "pods-7684" to be "terminated due to deadline exceeded" May 24 19:01:03.718: INFO: Pod "pod-update-activedeadlineseconds-53a04cf3-c558-41cc-9a87-b0812229e8a4": Phase="Running", Reason="", readiness=true. Elapsed: 2.996959ms May 24 19:01:05.721: INFO: Pod "pod-update-activedeadlineseconds-53a04cf3-c558-41cc-9a87-b0812229e8a4": Phase="Running", Reason="", readiness=true. Elapsed: 2.006764371s May 24 19:01:07.726: INFO: Pod "pod-update-activedeadlineseconds-53a04cf3-c558-41cc-9a87-b0812229e8a4": Phase="Failed", Reason="DeadlineExceeded", readiness=false.
Elapsed: 4.010876934s May 24 19:01:07.726: INFO: Pod "pod-update-activedeadlineseconds-53a04cf3-c558-41cc-9a87-b0812229e8a4" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:07.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7684" for this suite. • [SLOW TEST:6.579 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":151,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:47.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-downwardapi-5wv5 STEP: Creating a pod to test atomic-volume-subpath May 24 19:00:47.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5wv5" in namespace "subpath-1064" to be "Succeeded or Failed" May 24 19:00:47.333: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687333ms May 24 19:00:49.337: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 2.006365294s May 24 19:00:51.341: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 4.010079303s May 24 19:00:53.345: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 6.013985942s May 24 19:00:55.349: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 8.018171324s May 24 19:00:57.422: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 10.091690038s May 24 19:00:59.426: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 12.095540915s May 24 19:01:01.430: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 14.099813935s May 24 19:01:03.435: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 16.104054747s May 24 19:01:05.438: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 18.107301813s May 24 19:01:07.441: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.110773499s May 24 19:01:09.523: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Running", Reason="", readiness=true. Elapsed: 22.191909433s May 24 19:01:11.527: INFO: Pod "pod-subpath-test-downwardapi-5wv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.196435791s STEP: Saw pod success May 24 19:01:11.527: INFO: Pod "pod-subpath-test-downwardapi-5wv5" satisfied condition "Succeeded or Failed" May 24 19:01:11.530: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-downwardapi-5wv5 container test-container-subpath-downwardapi-5wv5: STEP: delete the pod May 24 19:01:11.546: INFO: Waiting for pod pod-subpath-test-downwardapi-5wv5 to disappear May 24 19:01:11.549: INFO: Pod pod-subpath-test-downwardapi-5wv5 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-5wv5 May 24 19:01:11.549: INFO: Deleting pod "pod-subpath-test-downwardapi-5wv5" in namespace "subpath-1064" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1064" for this suite. • [SLOW TEST:24.276 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":197,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:51.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD May 24 19:00:51.164: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:13.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4861" for this suite.
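Marking one version served: false is what removes its definition from the aggregated /openapi/v2 document while leaving the other version's schema untouched, which is the before/after this test checks. A two-version CRD sketch (group, names, and schemas are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: foos.example.com
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
    - name: v2
      served: false        # unserved, so its definition drops out of the published spec
      storage: false
      schema:
        openAPIV3Schema:
          type: object
  EOF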
• [SLOW TEST:22.187 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":5,"skipped":113,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:11.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 24 19:01:14.144: INFO: Successfully updated pod "pod-update-e2527b6c-6073-4111-bcf7-3935f4fbf060" STEP: verifying the updated pod is in kubernetes May 24 19:01:14.150: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:14.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9700" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":205,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:14.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-6469b63e-1c33-4930-907d-339bc2eb49b3 STEP: Creating a pod to test consume secrets May 24 19:01:14.214: INFO: Waiting up to 5m0s for pod "pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4" in namespace "secrets-1522" to be "Succeeded or Failed" May 24 19:01:14.217: INFO: Pod "pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.93066ms May 24 19:01:16.221: INFO: Pod "pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006537681s STEP: Saw pod success May 24 19:01:16.221: INFO: Pod "pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4" satisfied condition "Succeeded or Failed" May 24 19:01:16.224: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4 container secret-volume-test: STEP: delete the pod May 24 19:01:16.240: INFO: Waiting for pod pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4 to disappear May 24 19:01:16.242: INFO: Pod pod-secrets-545cc83b-7e4a-4455-9f4a-6bc86dd320a4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:16.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1522" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":209,"failed":0} SSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:13.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:01:15.384: INFO: Deleting pod "var-expansion-8e8cf7b2-7661-466b-827a-6e48d81d6e82" in namespace "var-expansion-1847" May 24 19:01:15.390: INFO: Wait up to 5m0s for pod "var-expansion-8e8cf7b2-7661-466b-827a-6e48d81d6e82" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:17.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1847" for this suite. 
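This negative case submits a pod whose volume subpath contains backticks and expects it to fail: subPathExpr performs only Kubernetes $(VAR_NAME) expansion and never invokes a shell. For contrast, the supported form looks like this (sketch; names and paths assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-subpath-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["/bin/sh", "-c", "echo ok > /logs/out.txt"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumeMounts:
      - name: workdir
        mountPath: /logs
        subPathExpr: $(POD_NAME)     # $(VAR) references only; backticks are rejected
    volumes:
    - name: workdir
      emptyDir: {}
  EOF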
• ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":-1,"completed":6,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:16.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-903af1a2-2cc7-4beb-b5af-eecae175c5ea STEP: Creating a pod to test consume secrets May 24 19:01:16.309: INFO: Waiting up to 5m0s for pod "pod-secrets-150b5b09-0495-4990-96e9-60a21a511183" in namespace "secrets-7762" to be "Succeeded or Failed" May 24 19:01:16.312: INFO: Pod "pod-secrets-150b5b09-0495-4990-96e9-60a21a511183": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974861ms May 24 19:01:18.316: INFO: Pod "pod-secrets-150b5b09-0495-4990-96e9-60a21a511183": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007143795s STEP: Saw pod success May 24 19:01:18.316: INFO: Pod "pod-secrets-150b5b09-0495-4990-96e9-60a21a511183" satisfied condition "Succeeded or Failed" May 24 19:01:18.319: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-150b5b09-0495-4990-96e9-60a21a511183 container secret-volume-test: STEP: delete the pod May 24 19:01:18.335: INFO: Waiting for pod pod-secrets-150b5b09-0495-4990-96e9-60a21a511183 to disappear May 24 19:01:18.337: INFO: Pod pod-secrets-150b5b09-0495-4990-96e9-60a21a511183 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:18.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7762" for this suite. 
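Same defaultMode mechanism as the downward-API case above, applied here to a secret volume. Sketch with assumed names and mode:

  kubectl create secret generic mode-demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-defaultmode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["/bin/sh", "-c", "stat -Lc '%a' /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: mode-demo-secret
        defaultMode: 0400            # mode applied to each projected key
  EOF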
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:44.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2863.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2863.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2863.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2863.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 133.202.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.202.133_udp@PTR;check="$$(dig +tcp +noall +answer +search 133.202.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.202.133_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2863.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2863.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2863.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2863.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2863.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2863.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 133.202.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.202.133_udp@PTR;check="$$(dig +tcp +noall +answer +search 133.202.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.202.133_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:00:48.924: INFO: Unable to read wheezy_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.928: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.932: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.935: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.957: INFO: Unable to read jessie_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.960: INFO: Unable to read jessie_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.964: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:48.987: INFO: Lookups using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb failed for: [wheezy_udp@dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_udp@dns-test-service.dns-2863.svc.cluster.local jessie_tcp@dns-test-service.dns-2863.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local] May 24 19:00:53.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:53.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods 
dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:53.997: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:54.000: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:54.022: INFO: Unable to read jessie_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:54.025: INFO: Unable to read jessie_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:54.028: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:54.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:54.051: INFO: Lookups using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb failed for: [wheezy_udp@dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_udp@dns-test-service.dns-2863.svc.cluster.local jessie_tcp@dns-test-service.dns-2863.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local] May 24 19:00:58.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:58.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:58.997: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:58.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:59.019: INFO: Unable to read jessie_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the 
server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:59.022: INFO: Unable to read jessie_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:59.025: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:59.028: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:00:59.046: INFO: Lookups using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb failed for: [wheezy_udp@dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_udp@dns-test-service.dns-2863.svc.cluster.local jessie_tcp@dns-test-service.dns-2863.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local] May 24 19:01:03.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:03.996: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:03.999: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:04.002: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:04.020: INFO: Unable to read jessie_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:04.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:04.026: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:04.028: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod 
dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:04.046: INFO: Lookups using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb failed for: [wheezy_udp@dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_udp@dns-test-service.dns-2863.svc.cluster.local jessie_tcp@dns-test-service.dns-2863.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local] May 24 19:01:09.022: INFO: Unable to read wheezy_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.026: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.128: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.132: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.335: INFO: Unable to read jessie_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.339: INFO: Unable to read jessie_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.342: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.345: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:09.362: INFO: Lookups using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb failed for: [wheezy_udp@dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_udp@dns-test-service.dns-2863.svc.cluster.local jessie_tcp@dns-test-service.dns-2863.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local] May 24 
19:01:14.024: INFO: Unable to read wheezy_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.028: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.032: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.035: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.059: INFO: Unable to read jessie_udp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.063: INFO: Unable to read jessie_tcp@dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.066: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.070: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local from pod dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb: the server could not find the requested resource (get pods dns-test-f32723b5-2ed9-44cf-9253-bde907539deb) May 24 19:01:14.095: INFO: Lookups using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb failed for: [wheezy_udp@dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@dns-test-service.dns-2863.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_udp@dns-test-service.dns-2863.svc.cluster.local jessie_tcp@dns-test-service.dns-2863.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2863.svc.cluster.local] May 24 19:01:19.067: INFO: DNS probes using dns-2863/dns-test-f32723b5-2ed9-44cf-9253-bde907539deb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:19.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2863" for this suite. 
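Each probe pod runs the dig loops quoted above and drops an OK marker file per name that resolves; the "Unable to read ..." messages early in the run are just the prober polling before cluster DNS has converged, and the test passes once every A, SRV, and PTR lookup answers. The essential queries, runnable from any pod in the cluster (the service name and ClusterIP below are the ones from this run):

  dig +short dns-test-service.dns-2863.svc.cluster.local A
  dig +short _http._tcp.dns-test-service.dns-2863.svc.cluster.local SRV
  dig +short -x 10.96.202.133      # PTR for the service ClusterIP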
• [SLOW TEST:34.243 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":14,"skipped":201,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:18.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's args May 24 19:01:18.466: INFO: Waiting up to 5m0s for pod "var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34" in namespace "var-expansion-652" to be "Succeeded or Failed" May 24 19:01:18.469: INFO: Pod "var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739796ms May 24 19:01:20.473: INFO: Pod "var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006891353s STEP: Saw pod success May 24 19:01:20.473: INFO: Pod "var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34" satisfied condition "Succeeded or Failed" May 24 19:01:20.476: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34 container dapi-container: STEP: delete the pod May 24 19:01:20.491: INFO: Waiting for pod var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34 to disappear May 24 19:01:20.494: INFO: Pod var-expansion-7cd906ee-45c4-4588-aa0e-6768f91a4a34 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:20.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-652" for this suite. 
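Command and args strings go through the same $(VAR) expansion, performed by the kubelet against the container's declared env before the process is exec'd, with no shell in between. Sketch (the message value is an assumption):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-args-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: hello from args
      command: ["/bin/echo"]
      args: ["$(MESSAGE)"]           # expanded to "hello from args" before exec
  EOF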
• ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":259,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:19.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 19:01:19.173: INFO: Waiting up to 5m0s for pod "pod-ccc68b17-571f-4c17-808f-f43de43527df" in namespace "emptydir-520" to be "Succeeded or Failed" May 24 19:01:19.175: INFO: Pod "pod-ccc68b17-571f-4c17-808f-f43de43527df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418067ms May 24 19:01:21.179: INFO: Pod "pod-ccc68b17-571f-4c17-808f-f43de43527df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006557871s STEP: Saw pod success May 24 19:01:21.179: INFO: Pod "pod-ccc68b17-571f-4c17-808f-f43de43527df" satisfied condition "Succeeded or Failed" May 24 19:01:21.182: INFO: Trying to get logs from node leguer-worker pod pod-ccc68b17-571f-4c17-808f-f43de43527df container test-container: STEP: delete the pod May 24 19:01:21.196: INFO: Waiting for pod pod-ccc68b17-571f-4c17-808f-f43de43527df to disappear May 24 19:01:21.200: INFO: Pod pod-ccc68b17-571f-4c17-808f-f43de43527df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:21.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-520" for this suite. 
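This is the (root,0777,default) sibling of the earlier emptyDir case: the same default-medium volume, now written as root with 0777 permissions, so only the user and mode change relative to that sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0777-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      securityContext:
        runAsUser: 0                 # root
      command: ["/bin/sh", "-c", "echo hi > /mnt/test/f && chmod 0777 /mnt/test/f && stat -c '%a %u' /mnt/test/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/test
    volumes:
    - name: scratch
      emptyDir: {}
  EOF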
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:07.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-0cd108bd-988a-4eaf-b629-11c42494bdee in namespace container-probe-5972 May 24 19:01:09.789: INFO: Started pod liveness-0cd108bd-988a-4eaf-b629-11c42494bdee in namespace container-probe-5972 STEP: checking the pod's current state and verifying that restartCount is present May 24 19:01:09.791: INFO: Initial restart count of pod liveness-0cd108bd-988a-4eaf-b629-11c42494bdee is 0 May 24 19:01:33.844: INFO: Restart count of pod container-probe-5972/liveness-0cd108bd-988a-4eaf-b629-11c42494bdee is now 1 (24.052812458s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:33.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5972" for this suite. • [SLOW TEST:26.118 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":152,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:33.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override command May 24 19:01:33.939: INFO: Waiting up to 5m0s for pod "client-containers-73b886d3-a183-4820-a1db-e732951500af" in namespace "containers-2773" to be "Succeeded or Failed" May 24 19:01:33.942: INFO: Pod "client-containers-73b886d3-a183-4820-a1db-e732951500af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.666206ms May 24 19:01:35.946: INFO: Pod "client-containers-73b886d3-a183-4820-a1db-e732951500af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006776451s STEP: Saw pod success May 24 19:01:35.946: INFO: Pod "client-containers-73b886d3-a183-4820-a1db-e732951500af" satisfied condition "Succeeded or Failed" May 24 19:01:35.949: INFO: Trying to get logs from node leguer-worker pod client-containers-73b886d3-a183-4820-a1db-e732951500af container agnhost-container: STEP: delete the pod May 24 19:01:35.965: INFO: Waiting for pod client-containers-73b886d3-a183-4820-a1db-e732951500af to disappear May 24 19:01:35.968: INFO: Pod client-containers-73b886d3-a183-4820-a1db-e732951500af no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:35.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2773" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":168,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:36.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events May 24 19:01:36.041: INFO: created test-event-1 May 24 19:01:36.045: INFO: created test-event-2 May 24 19:01:36.049: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events May 24 19:01:36.052: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity May 24 19:01:36.067: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:36.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8261" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":11,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:20.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 19:01:24.695: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:24.698: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:26.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:26.703: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:28.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:28.702: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:30.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:30.703: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:32.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:32.724: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:34.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:34.703: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:36.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:36.703: INFO: Pod pod-with-prestop-http-hook still exists May 24 19:01:38.698: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 19:01:38.702: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:38.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7472" for this suite. 
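------------------------------
The prestop spec above deletes a pod carrying a preStop HTTP hook and then polls until the kubelet has delivered the GET to the handler pod, which is why the log shows the "still exists" loop during graceful termination. A sketch of such a hook, same module assumptions as the earlier sketches; the handler host and port are illustrative, and note this run's v1.20 API names the type v1.Handler (newer releases renamed it LifecycleHandler):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // On deletion, the kubelet issues the HTTP GET before sending SIGTERM
    // to the container; the spec's handler pod records the request.
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{{
                Name:  "pod-with-prestop-http-hook",
                Image: "k8s.gcr.io/pause",
                Lifecycle: &v1.Lifecycle{
                    PreStop: &v1.Handler{ // LifecycleHandler in newer API versions
                        HTTPGet: &v1.HTTPGetAction{
                            Path: "/echo?msg=prestop",
                            Port: intstr.FromInt(8080),
                            Host: "10.244.2.1", // handler pod IP in the real spec; illustrative here
                        },
                    },
                },
            }},
        },
    }
    fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop.HTTPGet.Path)
}
------------------------------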
• [SLOW TEST:18.104 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:17.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:01:17.486: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 24 19:01:22.490: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 19:01:22.490: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 24 19:01:24.493: INFO: Creating deployment "test-rollover-deployment" May 24 19:01:24.501: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 24 19:01:26.508: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 24 19:01:26.515: INFO: Ensure that both replica sets have 1 created replica May 24 19:01:26.522: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 24 19:01:26.531: INFO: Updating deployment test-rollover-deployment May 24 19:01:26.531: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 24 19:01:28.627: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 24 19:01:28.635: INFO: Make sure deployment "test-rollover-deployment" is complete May 24 19:01:28.641: INFO: all replica sets need to contain the pod-template-hash label May 24 19:01:28.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479688, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, 
loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:01:30.650: INFO: all replica sets need to contain the pod-template-hash label May 24 19:01:30.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479688, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:01:32.729: INFO: all replica sets need to contain the pod-template-hash label May 24 19:01:32.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479688, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:01:34.651: INFO: all replica sets need to contain the pod-template-hash label May 24 19:01:34.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479688, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:01:36.649: INFO: all replica sets need to contain the pod-template-hash label May 24 19:01:36.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479688, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:01:38.649: INFO: May 24 19:01:38.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479688, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479684, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:01:40.650: INFO: May 24 19:01:40.650: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 May 24 19:01:40.659: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-796 870bac88-b2a8-4a31-ba64-f34b12b02c30 819802 2 2021-05-24 19:01:24 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-05-24 19:01:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 19:01:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035fbe28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-05-24 19:01:24 +0000 UTC,LastTransitionTime:2021-05-24 19:01:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-05-24 19:01:38 +0000 UTC,LastTransitionTime:2021-05-24 19:01:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 24 19:01:40.663: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-796 86c52887-204f-4cfe-8479-11336ebca5fe 819791 2 2021-05-24 19:01:26 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 870bac88-b2a8-4a31-ba64-f34b12b02c30 0xc00371e387 0xc00371e388}] [] [{kube-controller-manager Update apps/v1 2021-05-24 19:01:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"870bac88-b2a8-4a31-ba64-f34b12b02c30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00371e428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 24 19:01:40.663: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 24 19:01:40.663: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-796 b5b567a5-9b92-49b1-8fac-bb5c7213d4b7 819800 2 2021-05-24 19:01:17 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 870bac88-b2a8-4a31-ba64-f34b12b02c30 0xc00371e247 0xc00371e248}] [] [{e2e.test Update apps/v1 2021-05-24 19:01:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 19:01:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"870bac88-b2a8-4a31-ba64-f34b12b02c30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00371e318 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 19:01:40.663: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-796 170e2ac7-7a41-46ad-bee0-8e33224bad1d 819651 2 2021-05-24 19:01:24 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 870bac88-b2a8-4a31-ba64-f34b12b02c30 0xc00371e497 0xc00371e498}] [] [{kube-controller-manager Update apps/v1 2021-05-24 19:01:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"870bac88-b2a8-4a31-ba64-f34b12b02c30\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0xc00371e528 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 19:01:40.668: INFO: Pod "test-rollover-deployment-668db69979-k2xvr" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-k2xvr test-rollover-deployment-668db69979- deployment-796 d85c516a-2d11-4ec2-911d-f7cdd585360b 819674 0 2021-05-24 19:01:26 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.246" ], "mac": "be:62:b7:4b:a2:74", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.2.246" ], "mac": "be:62:b7:4b:a2:74", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 86c52887-204f-4cfe-8479-11336ebca5fe 0xc00371eaa7 0xc00371eaa8}] [] [{kube-controller-manager Update v1 2021-05-24 19:01:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86c52887-204f-4cfe-8479-11336ebca5fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 19:01:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 19:01:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-49hnc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-49hnc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-49hnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:01:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:01:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:01:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:01:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.246,StartTime:2021-05-24 19:01:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 19:01:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://8cfbf06f09a843a003bfbeace5a000e096e96c536eaf3ad5008e6cc721fe70aa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:40.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-796" for this suite. • [SLOW TEST:23.231 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":7,"skipped":144,"failed":0} SS ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:38.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override all May 24 19:01:38.815: INFO: Waiting up to 5m0s for pod "client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51" in namespace "containers-5110" to be "Succeeded or Failed" May 24 19:01:38.818: INFO: Pod "client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.778168ms May 24 19:01:40.822: INFO: Pod "client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007341333s STEP: Saw pod success May 24 19:01:40.822: INFO: Pod "client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51" satisfied condition "Succeeded or Failed" May 24 19:01:40.825: INFO: Trying to get logs from node leguer-worker pod client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51 container agnhost-container: STEP: delete the pod May 24 19:01:40.839: INFO: Waiting for pod client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51 to disappear May 24 19:01:40.842: INFO: Pod client-containers-2c8f8368-547e-470f-b5c1-f4cfdeef4e51 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:40.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5110" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:40.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:01:40.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8" in namespace "projected-6497" to be "Succeeded or Failed" May 24 19:01:40.744: INFO: Pod "downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099117ms May 24 19:01:42.748: INFO: Pod "downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006021654s STEP: Saw pod success May 24 19:01:42.748: INFO: Pod "downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8" satisfied condition "Succeeded or Failed" May 24 19:01:42.751: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8 container client-container: STEP: delete the pod May 24 19:01:42.767: INFO: Waiting for pod downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8 to disappear May 24 19:01:42.770: INFO: Pod downwardapi-volume-208b8e0c-c601-4512-a56d-7e55ba7926a8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:42.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6497" for this suite. 
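------------------------------
The projected downward API spec above surfaces the container's own memory limit as a file in a projected volume and reads it back from the container's logs. A sketch with the same module assumptions as the earlier ones; names are illustrative and 64Mi is an arbitrary limit:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    Projected: &v1.ProjectedVolumeSource{
                        Sources: []v1.VolumeProjection{{
                            DownwardAPI: &v1.DownwardAPIProjection{
                                Items: []v1.DownwardAPIVolumeFile{{
                                    // File content is the referenced container's memory limit.
                                    Path: "memory_limit",
                                    ResourceFieldRef: &v1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "limits.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []v1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: v1.ResourceRequirements{
                    Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
                },
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------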
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":146,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:42.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-b3df052c-7981-4c7b-8369-ca9d18b9a9a0 STEP: Creating a pod to test consume configMaps May 24 19:01:42.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a" in namespace "configmap-7777" to be "Succeeded or Failed" May 24 19:01:42.845: INFO: Pod "pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.675651ms May 24 19:01:44.849: INFO: Pod "pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006580742s STEP: Saw pod success May 24 19:01:44.849: INFO: Pod "pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a" satisfied condition "Succeeded or Failed" May 24 19:01:44.852: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a container agnhost-container: STEP: delete the pod May 24 19:01:44.872: INFO: Waiting for pod pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a to disappear May 24 19:01:44.875: INFO: Pod pod-configmaps-9cacc411-2619-4e74-b146-ee6bf436033a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:44.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7777" for this suite. 
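------------------------------
The ConfigMap "with mappings" variant above maps a key to an explicit file path rather than mounting it under the key name, and the spec asserts the value shows up at the mapped path. A sketch, same assumptions, illustrative names:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &v1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    vol := v1.Volume{
        Name: "configmap-volume",
        VolumeSource: v1.VolumeSource{
            ConfigMap: &v1.ConfigMapVolumeSource{
                LocalObjectReference: v1.LocalObjectReference{Name: cm.Name},
                // Key "data-1" is surfaced at <mountPath>/path/to/data-2,
                // not at <mountPath>/data-1.
                Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
            },
        },
    }
    fmt.Println(vol.VolumeSource.ConfigMap.Items[0].Path)
}
------------------------------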
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":155,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:44.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token May 24 19:01:45.486: INFO: created pod pod-service-account-defaultsa May 24 19:01:45.486: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 24 19:01:45.490: INFO: created pod pod-service-account-mountsa May 24 19:01:45.490: INFO: pod pod-service-account-mountsa service account token volume mount: true May 24 19:01:45.495: INFO: created pod pod-service-account-nomountsa May 24 19:01:45.495: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 24 19:01:45.500: INFO: created pod pod-service-account-defaultsa-mountspec May 24 19:01:45.500: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 24 19:01:45.505: INFO: created pod pod-service-account-mountsa-mountspec May 24 19:01:45.505: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 24 19:01:45.508: INFO: created pod pod-service-account-nomountsa-mountspec May 24 19:01:45.508: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 24 19:01:45.512: INFO: created pod pod-service-account-defaultsa-nomountspec May 24 19:01:45.512: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 24 19:01:45.517: INFO: created pod pod-service-account-mountsa-nomountspec May 24 19:01:45.517: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 24 19:01:45.523: INFO: created pod pod-service-account-nomountsa-nomountspec May 24 19:01:45.523: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:45.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5995" for this suite. 
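------------------------------
The ServiceAccounts spec above walks a precedence matrix: automountServiceAccountToken can be set on the ServiceAccount, on the pod spec, or both, and the pod-level field wins when both are present (hence pairs like nomountsa-mountspec mounting the token anyway). A sketch of the opt-out case, same assumptions:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    optOut := false
    // Opt out at the ServiceAccount level...
    sa := &v1.ServiceAccount{
        ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
        AutomountServiceAccountToken: &optOut,
    }
    // ...and/or at the pod level; the pod field takes precedence.
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa"},
        Spec: v1.PodSpec{
            ServiceAccountName:           sa.Name,
            AutomountServiceAccountToken: &optOut,
            Containers:                   []v1.Container{{Name: "token-test", Image: "busybox"}},
        },
    }
    fmt.Println(*pod.Spec.AutomountServiceAccountToken)
}
------------------------------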
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":10,"skipped":155,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:45.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:01:45.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71" in namespace "projected-2867" to be "Succeeded or Failed" May 24 19:01:45.590: INFO: Pod "downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.718033ms May 24 19:01:47.595: INFO: Pod "downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009245196s May 24 19:01:49.599: INFO: Pod "downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013322775s May 24 19:01:51.603: INFO: Pod "downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017913022s STEP: Saw pod success May 24 19:01:51.604: INFO: Pod "downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71" satisfied condition "Succeeded or Failed" May 24 19:01:51.607: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71 container client-container: STEP: delete the pod May 24 19:01:51.626: INFO: Waiting for pod downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71 to disappear May 24 19:01:51.628: INFO: Pod downwardapi-volume-48b0c711-9b1a-435e-905a-b7ecd6199c71 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:51.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2867" for this suite. 
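------------------------------
The node-allocatable variant above differs from the earlier memory-limit sketch in one respect: the container sets no memory limit, so the kubelet falls back to writing the node's allocatable memory into the projected file (which is also why the pod takes a few extra polls to Succeed here). Only the fragment below changes; field names match the earlier sketch and remain illustrative:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    // Same resourceFieldRef as before, but with no Resources.Limits on the
    // container the reported value is the node allocatable, not a pod limit.
    file := v1.DownwardAPIVolumeFile{
        Path: "memory_limit",
        ResourceFieldRef: &v1.ResourceFieldSelector{
            ContainerName: "client-container",
            Resource:      "limits.memory",
        },
    }
    fmt.Println(file.Path)
}
------------------------------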
• [SLOW TEST:6.087 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":163,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:21.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b2d74a4f-18fd-4a6d-8187-c2691c52a15d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b2d74a4f-18fd-4a6d-8187-c2691c52a15d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:51.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9884" for this suite. 
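------------------------------
The long runtime of the projected configMap spec above is mostly propagation wait: after the ConfigMap object is updated, the kubelet refreshes the projected volume on its periodic sync, so the new value appears inside the running pod without a restart. A sketch of the watching pod, same assumptions, illustrative names:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: v1.PodSpec{
            Volumes: []v1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: v1.VolumeSource{
                    Projected: &v1.ProjectedVolumeSource{
                        Sources: []v1.VolumeProjection{{
                            ConfigMap: &v1.ConfigMapProjection{
                                LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-demo"},
                            },
                        }},
                    },
                },
            }},
            Containers: []v1.Container{{
                Name:  "watcher",
                Image: "busybox",
                // Re-read the projected file until the updated value lands.
                Command:      []string{"sh", "-c", "while true; do cat /etc/cm/data-1; sleep 5; done"},
                VolumeMounts: []v1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/cm"}},
            }},
        },
    }
    fmt.Println(pod.Name)
}
------------------------------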
• [SLOW TEST:90.413 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:51.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-cdb92656-eadf-415b-bbda-e15dc879d9c7 STEP: Creating a pod to test consume secrets May 24 19:01:51.864: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4" in namespace "projected-689" to be "Succeeded or Failed" May 24 19:01:51.867: INFO: Pod "pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841233ms May 24 19:01:53.870: INFO: Pod "pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005978543s STEP: Saw pod success May 24 19:01:53.870: INFO: Pod "pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4" satisfied condition "Succeeded or Failed" May 24 19:01:53.873: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4 container projected-secret-volume-test: STEP: delete the pod May 24 19:01:53.887: INFO: Waiting for pod pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4 to disappear May 24 19:01:53.890: INFO: Pod pod-projected-secrets-74345a1d-97ef-401f-9567-2828afd346f4 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:53.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-689" for this suite. 
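------------------------------
The projected secret spec above sets defaultMode on the projected volume, so every file it materializes is created with that mode and the pod verifies it from inside. A sketch, same assumptions; 0400 is an illustration, the suite exercises several mode variants:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400)
    vol := v1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: v1.VolumeSource{
            Projected: &v1.ProjectedVolumeSource{
                // Applies to all files from all sources unless an item
                // overrides it with its own Mode.
                DefaultMode: &mode,
                Sources: []v1.VolumeProjection{{
                    Secret: &v1.SecretProjection{
                        LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-demo"},
                    },
                }},
            },
        },
    }
    fmt.Printf("%#o\n", *vol.VolumeSource.Projected.DefaultMode)
}
------------------------------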
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":79,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:53.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating Pod STEP: Reading file content from the nginx-container May 24 19:01:55.964: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7033 PodName:pod-sharedvolume-c51e129c-d9bb-4409-a5b5-21e057867cde ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:01:55.964: INFO: >>> kubeConfig: /root/.kube/config May 24 19:01:56.080: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:01:56.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7033" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":6,"skipped":91,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:36.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller May 24 19:01:36.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 create -f -' May 24 19:01:36.631: INFO: stderr: "" May 24 19:01:36.631: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 24 19:01:36.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 24 19:01:36.754: INFO: stderr: ""
May 24 19:01:36.754: INFO: stdout: "update-demo-nautilus-7rqfp update-demo-nautilus-mr7p7 "
May 24 19:01:36.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 24 19:01:36.866: INFO: stderr: ""
May 24 19:01:36.866: INFO: stdout: ""
May 24 19:01:36.866: INFO: update-demo-nautilus-7rqfp is created but not running
May 24 19:01:41.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 24 19:01:41.994: INFO: stderr: ""
May 24 19:01:41.994: INFO: stdout: "update-demo-nautilus-7rqfp update-demo-nautilus-mr7p7 "
May 24 19:01:41.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 24 19:01:42.106: INFO: stderr: ""
May 24 19:01:42.106: INFO: stdout: "true"
May 24 19:01:42.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 24 19:01:42.222: INFO: stderr: ""
May 24 19:01:42.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 24 19:01:42.222: INFO: validating pod update-demo-nautilus-7rqfp
May 24 19:01:42.227: INFO: got data: { "image": "nautilus.jpg" }
May 24 19:01:42.227: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 24 19:01:42.227: INFO: update-demo-nautilus-7rqfp is verified up and running
May 24 19:01:42.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-mr7p7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 24 19:01:42.343: INFO: stderr: ""
May 24 19:01:42.343: INFO: stdout: "true"
May 24 19:01:42.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-mr7p7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 24 19:01:42.469: INFO: stderr: ""
May 24 19:01:42.469: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 24 19:01:42.469: INFO: validating pod update-demo-nautilus-mr7p7
May 24 19:01:42.475: INFO: got data: { "image": "nautilus.jpg" }
May 24 19:01:42.475: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 24 19:01:42.475: INFO: update-demo-nautilus-mr7p7 is verified up and running
STEP: scaling down the replication controller
May 24 19:01:42.479: INFO: scanned /root for discovery docs:
May 24 19:01:42.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 scale rc update-demo-nautilus --replicas=1 --timeout=5m'
May 24 19:01:43.622: INFO: stderr: ""
May 24 19:01:43.622: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 24 19:01:43.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 24 19:01:43.752: INFO: stderr: ""
May 24 19:01:43.753: INFO: stdout: "update-demo-nautilus-7rqfp update-demo-nautilus-mr7p7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 24 19:01:48.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 24 19:01:48.881: INFO: stderr: ""
May 24 19:01:48.881: INFO: stdout: "update-demo-nautilus-7rqfp update-demo-nautilus-mr7p7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 24 19:01:53.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 24 19:01:53.989: INFO: stderr: ""
May 24 19:01:53.989: INFO: stdout: "update-demo-nautilus-7rqfp "
May 24 19:01:53.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 24 19:01:54.101: INFO: stderr: ""
May 24 19:01:54.101: INFO: stdout: "true"
May 24 19:01:54.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 24 19:01:54.211: INFO: stderr: ""
May 24 19:01:54.211: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 24 19:01:54.211: INFO: validating pod update-demo-nautilus-7rqfp
May 24 19:01:54.214: INFO: got data: { "image": "nautilus.jpg" }
May 24 19:01:54.214: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 24 19:01:54.214: INFO: update-demo-nautilus-7rqfp is verified up and running
STEP: scaling up the replication controller
May 24 19:01:54.218: INFO: scanned /root for discovery docs:
May 24 19:01:54.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
May 24 19:01:55.351: INFO: stderr: ""
May 24 19:01:55.351: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 24 19:01:55.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
May 24 19:01:55.476: INFO: stderr: ""
May 24 19:01:55.476: INFO: stdout: "update-demo-nautilus-7rqfp update-demo-nautilus-qh5nf "
May 24 19:01:55.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 24 19:01:55.574: INFO: stderr: ""
May 24 19:01:55.574: INFO: stdout: "true"
May 24 19:01:55.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-7rqfp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 24 19:01:55.678: INFO: stderr: ""
May 24 19:01:55.678: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 24 19:01:55.678: INFO: validating pod update-demo-nautilus-7rqfp
May 24 19:01:55.682: INFO: got data: { "image": "nautilus.jpg" }
May 24 19:01:55.682: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 24 19:01:55.682: INFO: update-demo-nautilus-7rqfp is verified up and running
May 24 19:01:55.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-qh5nf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
May 24 19:01:55.792: INFO: stderr: ""
May 24 19:01:55.792: INFO: stdout: "true"
May 24 19:01:55.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods update-demo-nautilus-qh5nf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
May 24 19:01:55.907: INFO: stderr: ""
May 24 19:01:55.907: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 24 19:01:55.907: INFO: validating pod update-demo-nautilus-qh5nf
May 24 19:01:55.915: INFO: got data: { "image": "nautilus.jpg" }
May 24 19:01:55.915: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 24 19:01:55.915: INFO: update-demo-nautilus-qh5nf is verified up and running
STEP: using delete to clean up resources
May 24 19:01:55.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 delete --grace-period=0 --force -f -'
May 24 19:01:56.028: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 24 19:01:56.028: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 24 19:01:56.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get rc,svc -l name=update-demo --no-headers'
May 24 19:01:56.177: INFO: stderr: "No resources found in kubectl-5526 namespace.\n"
May 24 19:01:56.177: INFO: stdout: ""
May 24 19:01:56.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5526 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 24 19:01:56.341: INFO: stderr: ""
May 24 19:01:56.341: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:01:56.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5526" for this suite.
• [SLOW TEST:20.141 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should scale a replication controller [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":12,"skipped":254,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:00:53.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0524 19:00:54.999630      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 24 19:01:57.042: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
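[Note] The garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then checks that its ReplicaSet survives. A minimal way to reproduce that behaviour by hand, assuming only a reachable cluster and kubectl >= 1.20 (the resource names here are illustrative, not the suite's):

  kubectl create deployment orphan-demo --image=nginx
  # --cascade=orphan maps to deleteOptions.propagationPolicy=Orphan
  # (older kubectl clients spell this --cascade=false)
  kubectl delete deployment orphan-demo --cascade=orphan
  # the ReplicaSet and its pods should still be listed afterwards
  kubectl get rs,pods -l app=orphan-demo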
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:01:57.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1531" for this suite.
• [SLOW TEST:63.121 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":11,"skipped":211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:00:24.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl can dry-run update Pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 24 19:00:24.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2720 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
May 24 19:00:24.402: INFO: stderr: ""
May 24 19:00:24.403: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
May 24 19:00:24.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2720 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server'
May 24 19:00:24.803: INFO: stderr: ""
May 24 19:00:24.803: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine
May 24 19:00:24.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2720 delete pods e2e-test-httpd-pod'
May 24 19:01:58.114: INFO: stderr: ""
May 24 19:01:58.114: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:01:58.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2720" for this suite.
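[Note] The kubectl flow just above patches a running pod with --dry-run=server and then verifies the live object is unchanged. A hedged, stand-alone sketch of the same pattern (pod name and images are illustrative):

  kubectl run dry-run-demo --image=httpd:2.4.38-alpine
  # server-side dry-run: the API server admits and validates the patch
  # but persists nothing
  kubectl patch pod dry-run-demo --dry-run=server \
    -p '{"spec":{"containers":[{"name":"dry-run-demo","image":"busybox:1.29"}]}}'
  # the live pod should still report the original image
  kubectl get pod dry-run-demo -o jsonpath='{.spec.containers[0].image}'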
• [SLOW TEST:93.883 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":7,"skipped":175,"failed":0}
SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:01:56.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 19:01:56.154: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 24 19:01:58.182: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:01:59.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6042" for this suite.
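[Note] The ReplicationController spec above relies on a ResourceQuota of two pods to force a ReplicaFailure condition onto the controller's status. A minimal reproduction, assuming a scratch namespace you can create (all names are illustrative):

  kubectl create namespace quota-demo
  kubectl create quota condition-test --hard=pods=2 -n quota-demo
  kubectl create -n quota-demo -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: condition-test
  spec:
    replicas: 3
    selector:
      app: condition-test
    template:
      metadata:
        labels:
          app: condition-test
      spec:
        containers:
        - name: httpd
          image: httpd:2.4.38-alpine
  EOF
  # the quota blocks the third pod, so a ReplicaFailure condition appears
  kubectl get rc condition-test -n quota-demo \
    -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
  # scaling down to fit the quota clears the condition again
  kubectl scale rc condition-test --replicas=2 -n quota-demo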
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":7,"skipped":114,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:56.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 19:01:56.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-1161 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' May 24 19:01:56.525: INFO: stderr: "" May 24 19:01:56.525: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 May 24 19:01:56.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-1161 delete pods e2e-test-httpd-pod' May 24 19:02:02.161: INFO: stderr: "" May 24 19:02:02.161: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:02.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1161" for this suite. 
• [SLOW TEST:5.809 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517
    should create a pod from an image when restart is Never [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":13,"skipped":259,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:01:59.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on node default medium
May 24 19:01:59.238: INFO: Waiting up to 5m0s for pod "pod-39f46e75-dbfc-4c56-93fa-2e542a501352" in namespace "emptydir-3000" to be "Succeeded or Failed"
May 24 19:01:59.241: INFO: Pod "pod-39f46e75-dbfc-4c56-93fa-2e542a501352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236309ms
May 24 19:02:01.244: INFO: Pod "pod-39f46e75-dbfc-4c56-93fa-2e542a501352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005069866s
May 24 19:02:03.247: INFO: Pod "pod-39f46e75-dbfc-4c56-93fa-2e542a501352": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008232789s
STEP: Saw pod success
May 24 19:02:03.247: INFO: Pod "pod-39f46e75-dbfc-4c56-93fa-2e542a501352" satisfied condition "Succeeded or Failed"
May 24 19:02:03.249: INFO: Trying to get logs from node leguer-worker2 pod pod-39f46e75-dbfc-4c56-93fa-2e542a501352 container test-container:
STEP: delete the pod
May 24 19:02:03.261: INFO: Waiting for pod pod-39f46e75-dbfc-4c56-93fa-2e542a501352 to disappear
May 24 19:02:03.264: INFO: Pod pod-39f46e75-dbfc-4c56-93fa-2e542a501352 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:02:03.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3000" for this suite.
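[Note] The emptyDir spec above runs a short-lived pod that inspects its mount and exits, then the suite asserts on the pod phase ("Succeeded or Failed") and reads the logs back. A hand-rolled sketch of the same shape, assuming names and image are illustrative and that the default medium typically surfaces as a world-writable (mode 777) directory:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      # print the mode bits of the mount point, then exit 0
      command: ["sh", "-c", "stat -c '%a' /mnt/volume && ls -ld /mnt/volume"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir: {}          # default medium: node disk; no sizeLimit
  EOF
  # once the pod has completed, the logs carry the observed mode
  kubectl logs pod/emptydir-mode-demo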
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":118,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:04.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 24 19:01:04.333: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819104 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:01:04.333: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819104 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 24 19:01:14.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819301 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:01:14.342: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819301 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 24 19:01:24.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819579 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:01:24.350: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819579 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 24 19:01:34.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819719 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:01:34.358: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2542 3e40bae1-c018-44f3-a776-8df0ef5a9dfe 819719 0 2021-05-24 19:01:04 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 24 19:01:44.366: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2542 3c91d2bc-954b-4cdc-88c2-7148f5849460 819982 0 2021-05-24 19:01:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:01:44.366: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2542 3c91d2bc-954b-4cdc-88c2-7148f5849460 819982 0 2021-05-24 19:01:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 24 19:01:54.372: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2542 3c91d2bc-954b-4cdc-88c2-7148f5849460 820266 0 2021-05-24 19:01:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:01:54.372: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2542 3c91d2bc-954b-4cdc-88c2-7148f5849460 820266 0 2021-05-24 19:01:44 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-05-24 19:01:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:02:04.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2542" for this suite.
• [SLOW TEST:60.085 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":10,"skipped":274,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:02:02.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test service account token:
May 24 19:02:02.241: INFO: Waiting up to 5m0s for pod "test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2" in namespace "svcaccounts-9770" to be "Succeeded or Failed"
May 24 19:02:02.243: INFO: Pod "test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493044ms
May 24 19:02:04.246: INFO: Pod "test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00533206s
May 24 19:02:06.250: INFO: Pod "test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00873918s
STEP: Saw pod success
May 24 19:02:06.250: INFO: Pod "test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2" satisfied condition "Succeeded or Failed"
May 24 19:02:06.253: INFO: Trying to get logs from node leguer-worker2 pod test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2 container agnhost-container:
STEP: delete the pod
May 24 19:02:06.266: INFO: Waiting for pod test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2 to disappear
May 24 19:02:06.269: INFO: Pod test-pod-aac37e13-fc62-407f-a8be-0e4d32f06ba2 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:02:06.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9770" for this suite.
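[Note] The ServiceAccounts spec above mounts a projected service account token into the pod and reads it back from the filesystem. A minimal sketch of the projected-volume shape involved, assuming the namespace's default service account (all names and paths are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-token-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox:1.29
      # prove the token file exists by printing its size
      command: ["sh", "-c", "wc -c /var/run/secrets/tokens/sa-token"]
      volumeMounts:
      - name: token
        mountPath: /var/run/secrets/tokens
    volumes:
    - name: token
      projected:
        sources:
        - serviceAccountToken:
            path: sa-token
            expirationSeconds: 3600   # kubelet rotates the token before expiry
  EOF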
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":14,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:58.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 24 19:02:06.197: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.197: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.311: INFO: Exec stderr: "" May 24 19:02:06.311: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.311: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.393: INFO: Exec stderr: "" May 24 19:02:06.393: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.393: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.500: INFO: Exec stderr: "" May 24 19:02:06.500: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.500: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.592: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 24 19:02:06.592: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.592: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.710: INFO: Exec stderr: "" May 24 19:02:06.710: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.710: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.821: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 24 19:02:06.821: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 
19:02:06.821: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:06.933: INFO: Exec stderr: "" May 24 19:02:06.933: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:06.933: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:07.018: INFO: Exec stderr: "" May 24 19:02:07.018: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:07.018: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:07.131: INFO: Exec stderr: "" May 24 19:02:07.131: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8347 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:07.131: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:07.236: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:07.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8347" for this suite. • [SLOW TEST:9.104 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:03.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 24 19:02:03.315: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6297 0c418c99-1a5e-4c56-95f0-3a99b9bc5950 820614 0 2021-05-24 19:02:03 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-05-24 19:02:03 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wbpd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wbpd9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wbpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 19:02:03.317: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 24 19:02:05.320: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 24 19:02:07.320: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 24 19:02:07.320: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6297 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:07.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... May 24 19:02:07.414: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6297 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:07.414: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:07.531: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:07.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6297" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":9,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:07.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:09.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4185" for this suite. 
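[Note] The Kubelet spec just above runs a read-only busybox container and asserts that nothing can be written to its root filesystem. The securityContext knob it exercises looks like this; a minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readonly-root-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox:1.29
      # writing anywhere on / should fail; writable paths must come from volumes
      command: ["sh", "-c", "touch /probe-file && echo writable || echo read-only"]
      securityContext:
        readOnlyRootFilesystem: true
  EOF
  kubectl logs pod/readonly-root-demo   # expect: read-only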
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":200,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:09.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name secret-emptykey-test-1d4de1dd-c9aa-4082-b4d1-9f36ab7a0ca6 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:09.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8046" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":10,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:04.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:02:04.465: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 24 19:02:09.468: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 19:02:09.468: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 May 24 19:02:09.483: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-1046 23bc911a-8865-42fb-86e1-c7b477a5da20 820838 1 2021-05-24 19:02:09 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-05-24 19:02:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00384b1d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 24 19:02:09.485: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
May 24 19:02:09.485: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
May 24 19:02:09.486: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-1046  90b2c61f-a2a8-4252-8091-07430b0d96a5 820839 1 2021-05-24 19:02:04 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 23bc911a-8865-42fb-86e1-c7b477a5da20 0xc00384b4d7 0xc00384b4d8}] []  [{e2e.test Update apps/v1 2021-05-24 19:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-24 19:02:09 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"23bc911a-8865-42fb-86e1-c7b477a5da20\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00384b578 ClusterFirst map[]   false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  nil []  map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 24 19:02:09.488: INFO: Pod "test-cleanup-controller-sddgs" is available: &Pod{ObjectMeta:{test-cleanup-controller-sddgs test-cleanup-controller- deployment-1046  f9dce566-41dc-4e12-a855-94f40dd279e7 820737 0 2021-05-24 19:02:04 +0000 UTC map[name:cleanup-pod pod:httpd] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.99" ], "mac": "ae:97:81:92:00:05", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "", "interface": "eth0", "ips": [ "10.244.1.99" ], "mac": "ae:97:81:92:00:05", "default": true, "dns": {} }]] [{apps/v1 ReplicaSet test-cleanup-controller 90b2c61f-a2a8-4252-8091-07430b0d96a5 0xc00384b817 0xc00384b818}] []  [{kube-controller-manager Update v1 2021-05-24 19:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90b2c61f-a2a8-4252-8091-07430b0d96a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2021-05-24 19:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2021-05-24 19:02:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.99\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b8smh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b8smh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b8smh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:02:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:02:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:02:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-24 19:02:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.1.99,StartTime:2021-05-24 19:02:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-05-24 19:02:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1b1b8dcbe57f0683e9264bb284ade654264701da424d4825703df7661cfb44a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:02:09.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1046" for this suite.
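[Note] In the Deployment dump above, RevisionHistoryLimit is *0, which is what lets the controller delete a superseded ReplicaSet as soon as the rollout completes. A sketch of the manifest field involved, with illustrative names and images:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: cleanup-demo
  spec:
    replicas: 1
    revisionHistoryLimit: 0   # keep no superseded ReplicaSets around
    selector:
      matchLabels:
        app: cleanup-demo
    template:
      metadata:
        labels:
          app: cleanup-demo
      spec:
        containers:
        - name: agnhost
          image: k8s.gcr.io/e2e-test-images/agnhost:2.21
  EOF
  # after an image change rolls out, old ReplicaSets are garbage-collected
  kubectl set image deployment/cleanup-demo agnhost=httpd:2.4.38-alpine
  kubectl get rs -l app=cleanup-demo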
• [SLOW TEST:5.065 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":11,"skipped":305,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:07.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars May 24 19:02:07.662: INFO: Waiting up to 5m0s for pod "downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782" in namespace "downward-api-650" to be "Succeeded or Failed" May 24 19:02:07.664: INFO: Pod "downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773784ms May 24 19:02:09.668: INFO: Pod "downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005881253s STEP: Saw pod success May 24 19:02:09.668: INFO: Pod "downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782" satisfied condition "Succeeded or Failed" May 24 19:02:09.671: INFO: Trying to get logs from node leguer-worker2 pod downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782 container dapi-container: STEP: delete the pod May 24 19:02:09.687: INFO: Waiting for pod downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782 to disappear May 24 19:02:09.690: INFO: Pod downward-api-8a658efe-e302-4cd1-8a22-4816d8bc7782 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:09.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-650" for this suite. 
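
Note: the Downward API spec above injects the node's IP into the container through an environment variable. A minimal pod that does the same (pod name and image are illustrative assumptions, not the suite's exact manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start
EOF
kubectl logs dapi-demo   # prints the node IP once the pod has completed
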
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":164,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:09.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:02:09.527: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:10.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4066" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":12,"skipped":308,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:06.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service endpoint-test2 in namespace services-1397 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1397 to expose endpoints map[] May 24 19:02:06.354: INFO: successfully validated that service endpoint-test2 in namespace services-1397 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-1397 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1397 to expose endpoints map[pod1:[80]] May 24 19:02:08.372: INFO: successfully validated that service endpoint-test2 in namespace services-1397 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-1397 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1397 to expose endpoints map[pod1:[80] pod2:[80]] May 24 19:02:10.390: INFO: successfully validated that service endpoint-test2 in namespace services-1397 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-1397 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1397 to expose endpoints map[pod2:[80]] May 24 19:02:10.406: INFO: successfully validated that service endpoint-test2 in namespace 
services-1397 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-1397 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1397 to expose endpoints map[] May 24 19:02:10.417: INFO: successfully validated that service endpoint-test2 in namespace services-1397 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:10.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1397" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":15,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:09.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars May 24 19:02:09.534: INFO: Waiting up to 5m0s for pod "downward-api-02f0fe15-7838-407f-ab64-009066204ab9" in namespace "downward-api-2471" to be "Succeeded or Failed" May 24 19:02:09.537: INFO: Pod "downward-api-02f0fe15-7838-407f-ab64-009066204ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049519ms May 24 19:02:11.540: INFO: Pod "downward-api-02f0fe15-7838-407f-ab64-009066204ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005675362s STEP: Saw pod success May 24 19:02:11.540: INFO: Pod "downward-api-02f0fe15-7838-407f-ab64-009066204ab9" satisfied condition "Succeeded or Failed" May 24 19:02:11.543: INFO: Trying to get logs from node leguer-worker2 pod downward-api-02f0fe15-7838-407f-ab64-009066204ab9 container dapi-container: STEP: delete the pod May 24 19:02:11.558: INFO: Waiting for pod downward-api-02f0fe15-7838-407f-ab64-009066204ab9 to disappear May 24 19:02:11.560: INFO: Pod downward-api-02f0fe15-7838-407f-ab64-009066204ab9 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:11.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2471" for this suite. 
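
Note: the endpoint-test2 sequence above (service endpoints tracking pod creation and deletion) can be reproduced with a selector-based Service and a matching pod; the names here are illustrative:

kubectl create service clusterip endpoint-demo --tcp=80:80   # selector defaults to app=endpoint-demo
kubectl run pod1 --image=docker.io/library/httpd:2.4.38-alpine --labels=app=endpoint-demo
kubectl get endpoints endpoint-demo --watch   # pod1's IP is added once it is Ready
kubectl delete pod pod1                       # and removed again on deletion
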
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":278,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:51.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 24 19:01:51.677: INFO: >>> kubeConfig: /root/.kube/config May 24 19:01:55.664: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:12.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8914" for this suite. • [SLOW TEST:20.632 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":12,"skipped":164,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:10.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:02:10.557: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231" in namespace "projected-4951" to be "Succeeded or Failed" May 24 19:02:10.560: INFO: Pod "downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719886ms May 24 19:02:12.563: INFO: Pod "downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00553335s STEP: Saw pod success May 24 19:02:12.563: INFO: Pod "downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231" satisfied condition "Succeeded or Failed" May 24 19:02:12.565: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231 container client-container: STEP: delete the pod May 24 19:02:12.577: INFO: Waiting for pod downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231 to disappear May 24 19:02:12.579: INFO: Pod downwardapi-volume-2188a72a-c9da-4934-a1e9-2910fd79e231 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:12.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4951" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":355,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:12.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 19:02:12.637: INFO: Waiting up to 5m0s for pod "pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca" in namespace "emptydir-2949" to be "Succeeded or Failed" May 24 19:02:12.639: INFO: Pod "pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237962ms May 24 19:02:14.643: INFO: Pod "pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005594469s May 24 19:02:16.646: INFO: Pod "pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008521015s STEP: Saw pod success May 24 19:02:16.646: INFO: Pod "pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca" satisfied condition "Succeeded or Failed" May 24 19:02:16.648: INFO: Trying to get logs from node leguer-worker pod pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca container test-container: STEP: delete the pod May 24 19:02:16.662: INFO: Waiting for pod pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca to disappear May 24 19:02:16.664: INFO: Pod pod-3668a12b-57c8-408f-9cc9-2ebe6b482bca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:16.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2949" for this suite. 
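
Note: the EmptyDir spec above writes a 0644 file onto a memory-backed (tmpfs) volume. A hand-rolled equivalent, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "echo hello > /mnt/data && chmod 0644 /mnt/data && ls -l /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs, as in the (root,0644,tmpfs) variant
EOF
kubectl logs emptydir-demo   # shows -rw-r--r-- for /mnt/data
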
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":368,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:12.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:02:12.355: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"52c3f8e2-51b9-4aea-ba70-f6363f97aefd", Controller:(*bool)(0xc005222912), BlockOwnerDeletion:(*bool)(0xc005222913)}} May 24 19:02:12.373: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e7055dea-a951-407b-a502-24b202f5f19d", Controller:(*bool)(0xc005037582), BlockOwnerDeletion:(*bool)(0xc005037583)}} May 24 19:02:12.381: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"99b526dc-2a4c-449c-8c49-f98f2f54c2a1", Controller:(*bool)(0xc0050377ca), BlockOwnerDeletion:(*bool)(0xc0050377cb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:17.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6423" for this suite. 
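
Note: the garbage-collector spec above links pod1 -> pod3 -> pod2 -> pod1 through ownerReferences and verifies the cycle does not block deletion. The shape of such a reference, set by hand on a single pod (UIDs must be read back because they are assigned at creation; pod names are illustrative):

kubectl run pod1 --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl run pod2 --image=busybox:1.28 --restart=Never -- sleep 3600
POD1_UID=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
# Make pod2 a dependent of pod1; deleting pod1 then garbage-collects pod2.
kubectl patch pod pod2 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$POD1_UID\"}]}}"
kubectl delete pod pod1
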
• [SLOW TEST:5.092 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":13,"skipped":183,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:11.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:02:11.619: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 19:02:16.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2417 --namespace=crd-publish-openapi-2417 create -f -' May 24 19:02:16.555: INFO: stderr: "" May 24 19:02:16.555: INFO: stdout: "e2e-test-crd-publish-openapi-4840-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 24 19:02:16.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2417 --namespace=crd-publish-openapi-2417 delete e2e-test-crd-publish-openapi-4840-crds test-cr' May 24 19:02:16.679: INFO: stderr: "" May 24 19:02:16.679: INFO: stdout: "e2e-test-crd-publish-openapi-4840-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 24 19:02:16.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2417 --namespace=crd-publish-openapi-2417 apply -f -' May 24 19:02:16.952: INFO: stderr: "" May 24 19:02:16.952: INFO: stdout: "e2e-test-crd-publish-openapi-4840-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 24 19:02:16.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2417 --namespace=crd-publish-openapi-2417 delete e2e-test-crd-publish-openapi-4840-crds test-cr' May 24 19:02:17.073: INFO: stderr: "" May 24 19:02:17.073: INFO: stdout: "e2e-test-crd-publish-openapi-4840-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 24 19:02:17.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2417 explain e2e-test-crd-publish-openapi-4840-crds' May 24 19:02:17.336: INFO: stderr: "" May 24 19:02:17.336: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4840-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:21.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2417" for this suite. • [SLOW TEST:9.617 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":12,"skipped":289,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:17.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9983.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9983.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9983.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9983.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9983.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:02:21.482: INFO: DNS probes using dns-9983/dns-test-cd1a41d3-7e66-4bd6-99d9-3b6e060025ea succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:21.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9983" for this suite. 
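
Note: the DNS probes above rely on the pod A-record convention <ip-with-dashes>.<namespace>.pod.cluster.local, visible in the podARec computation. It can be checked by hand (assuming the default namespace and the cluster.local domain; the pod name is illustrative):

kubectl run dns-demo --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/dns-demo
IP=$(kubectl get pod dns-demo -o jsonpath='{.status.podIP}')
REC="$(echo "$IP" | tr . -).default.pod.cluster.local"   # e.g. 10-244-1-99.default.pod.cluster.local
kubectl exec dns-demo -- nslookup "$REC"
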
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":184,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:21.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Pod with a static label STEP: watching for Pod to be ready May 24 19:02:21.283: INFO: observed Pod pod-test in namespace pods-3639 in phase Pending conditions [] May 24 19:02:21.286: INFO: observed Pod pod-test in namespace pods-3639 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC }] May 24 19:02:21.299: INFO: observed Pod pod-test in namespace pods-3639 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC }] May 24 19:02:21.781: INFO: observed Pod pod-test in namespace pods-3639 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:21 +0000 UTC }] STEP: patching the Pod with a new Label and updated data May 24 19:02:22.944: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted May 24 19:02:22.969: INFO: observed event type ADDED May 24 19:02:22.969: INFO: observed event type MODIFIED May 24 19:02:22.970: INFO: observed event type MODIFIED May 24 19:02:22.970: INFO: observed event type MODIFIED May 24 19:02:22.970: INFO: observed event type MODIFIED May 24 19:02:22.970: INFO: observed event type MODIFIED May 24 19:02:22.971: INFO: observed event type MODIFIED May 24 19:02:22.971: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:22.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3639" for this suite. 
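
Note: the Pods spec above patches a label and then removes the pod via a collection delete with a LabelSelector. The equivalent kubectl flow (label keys and values are illustrative):

kubectl run pod-test --image=docker.io/library/httpd:2.4.38-alpine --labels=test-pod-static=true
kubectl patch pod pod-test -p '{"metadata":{"labels":{"test-pod":"patched"}}}'
kubectl get pod pod-test --show-labels
kubectl delete pods -l test-pod=patched   # collection delete via LabelSelector
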
• ------------------------------ {"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":13,"skipped":302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:21.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9895 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating stateful set ss in namespace statefulset-9895 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9895 May 24 19:01:21.290: INFO: Found 0 stateful pods, waiting for 1 May 24 19:01:31.323: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 24 19:01:31.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:01:31.595: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:01:31.595: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:01:31.595: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:01:31.600: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 19:01:41.605: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 19:01:41.605: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:01:41.619: INFO: POD NODE PHASE GRACE CONDITIONS May 24 19:01:41.620: INFO: ss-0 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC }] May 24 19:01:41.620: INFO: May 24 19:01:41.620: INFO: StatefulSet ss has not reached scale 3, at 1 May 24 19:01:42.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996443241s May 24 19:01:43.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991706438s May 24 19:01:44.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986910147s May 24 19:01:45.638: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 5.982026346s May 24 19:01:46.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978153057s May 24 19:01:47.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.97330456s May 24 19:01:48.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.968772567s May 24 19:01:49.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.963692556s May 24 19:01:50.661: INFO: Verifying statefulset ss doesn't scale past 3 for another 959.119885ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9895 May 24 19:01:51.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:01:51.908: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:01:51.908: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:01:51.908: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:01:51.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:01:52.145: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 24 19:01:52.145: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:01:52.145: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:01:52.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:01:52.393: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 24 19:01:52.393: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:01:52.393: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:01:52.397: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 24 19:02:02.402: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 19:02:02.402: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 19:02:02.402: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 24 19:02:02.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:02:02.670: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:02:02.670: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:02:02.670: INFO: 
stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:02:02.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:02:02.909: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:02:02.909: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:02:02.909: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:02:02.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9895 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:02:03.111: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:02:03.111: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:02:03.111: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:02:03.111: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:02:03.114: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 24 19:02:13.121: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 19:02:13.121: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 19:02:13.121: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 19:02:13.135: INFO: POD NODE PHASE GRACE CONDITIONS May 24 19:02:13.135: INFO: ss-0 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC }] May 24 19:02:13.135: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:13.135: INFO: ss-2 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:13.135: INFO: May 24 19:02:13.135: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 19:02:14.140: INFO: POD NODE PHASE GRACE 
CONDITIONS May 24 19:02:14.140: INFO: ss-0 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC }] May 24 19:02:14.140: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:14.140: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:14.140: INFO: May 24 19:02:14.140: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 19:02:15.144: INFO: POD NODE PHASE GRACE CONDITIONS May 24 19:02:15.144: INFO: ss-0 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC }] May 24 19:02:15.144: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:15.144: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:15.144: INFO: May 24 19:02:15.144: INFO: StatefulSet ss has not reached scale 0, at 3 May 24 19:02:16.148: INFO: POD NODE PHASE GRACE CONDITIONS May 24 19:02:16.148: INFO: ss-0 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 
19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC }] May 24 19:02:16.148: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:41 +0000 UTC }] May 24 19:02:16.148: INFO: May 24 19:02:16.148: INFO: StatefulSet ss has not reached scale 0, at 2 May 24 19:02:17.152: INFO: POD NODE PHASE GRACE CONDITIONS May 24 19:02:17.152: INFO: ss-0 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:02:02 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-24 19:01:21 +0000 UTC }] May 24 19:02:17.152: INFO: May 24 19:02:17.152: INFO: StatefulSet ss has not reached scale 0, at 1 May 24 19:02:18.155: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.979127784s May 24 19:02:19.159: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.975788871s May 24 19:02:20.162: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.971856795s May 24 19:02:21.165: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.968935276s May 24 19:02:22.169: INFO: Verifying statefulset ss doesn't scale past 0 for another 965.6887ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9895 May 24 19:02:23.173: INFO: Scaling statefulset ss to 0 May 24 19:02:23.185: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 19:02:23.188: INFO: Deleting all statefulset in ns statefulset-9895 May 24 19:02:23.191: INFO: Scaling statefulset ss to 0 May 24 19:02:23.202: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:02:23.205: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:23.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9895" for this suite.
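
Note: "burst scaling" in the spec above corresponds to podManagementPolicy: Parallel, under which the controller creates and deletes pods without waiting for lower ordinals to become Ready (hence ss-1 and ss-2 starting together while ss-0 was unready). A minimal parallel StatefulSet, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ss-demo
spec:
  clusterIP: None          # headless governing service
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo
spec:
  serviceName: ss-demo
  podManagementPolicy: Parallel   # burst create/delete instead of ordered
  replicas: 3
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl scale statefulset ss-demo --replicas=0   # all pods are deleted at once
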
• [SLOW TEST:61.982 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":16,"skipped":237,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:21.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:23.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3704" for this suite. 
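
Note: the ReplicationController spec above walks create, patch, scale, and delete-by-collection. A condensed kubectl version with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo
  labels:
    test-rc-static: "true"
spec:
  replicas: 1
  selector:
    app: rc-demo
  template:
    metadata:
      labels:
        app: rc-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl patch rc rc-demo -p '{"spec":{"replicas":2}}'   # scale via patch
kubectl get rc -l test-rc-static=true                   # list by label
kubectl delete rc -l test-rc-static=true                # delete the collection
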
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":15,"skipped":196,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:23.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:26.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7535" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":208,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:16.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating server pod server in namespace prestop-41 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-41 STEP: Deleting pre-stop pod May 24 19:02:27.757: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:27.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-41" for this suite. 
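
Note: the PreStop spec above counts hook invocations on a peer server pod; the hook itself is ordinary pod lifecycle configuration. A minimal form that only logs locally (names and command are illustrative; the e2e variant POSTs to its server pod instead):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop > /tmp/hook.log"]   # runs before SIGTERM
EOF
kubectl delete pod prestop-demo   # triggers the preStop exec during termination
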
• [SLOW TEST:11.093 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":18,"skipped":380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:27.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:02:27.906: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:34.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2289" for this suite. • [SLOW TEST:6.468 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":19,"skipped":425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:34.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-83fe512a-7ada-46ce-8ea6-7443de0e7467 STEP: Creating a pod to test consume configMaps May 24 19:02:34.443: INFO: Waiting up to 5m0s for pod "pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0" in namespace "configmap-2019" to be "Succeeded or Failed" May 24 19:02:34.446: INFO: Pod 
"pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.9354ms May 24 19:02:36.449: INFO: Pod "pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006400282s STEP: Saw pod success May 24 19:02:36.449: INFO: Pod "pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0" satisfied condition "Succeeded or Failed" May 24 19:02:36.452: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0 container agnhost-container: STEP: delete the pod May 24 19:02:36.472: INFO: Waiting for pod pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0 to disappear May 24 19:02:36.477: INFO: Pod pod-configmaps-de7052a8-82a7-40b5-9b8f-4d7b6b4b0bb0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:36.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2019" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":467,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:36.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:02:36.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3854 version' May 24 19:02:36.627: INFO: stderr: "" May 24 19:02:36.627: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.6\", GitCommit:\"8a62859e515889f07e3e3be6a1080413f17cf2c3\", GitTreeState:\"clean\", BuildDate:\"2021-04-15T03:28:42Z\", GoVersion:\"go1.15.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.7\", GitCommit:\"132a687512d7fb058d0f5890f07d4121b3f0a2e2\", GitTreeState:\"clean\", BuildDate:\"2021-05-18T09:25:49Z\", GoVersion:\"go1.15.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:36.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3854" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":21,"skipped":474,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:26.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:39.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2360" for this suite. • [SLOW TEST:13.099 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":17,"skipped":215,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:23.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-1714 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 19:02:23.284: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 24 19:02:23.305: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 19:02:25.309: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:27.309: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:29.312: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:31.308: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:33.310: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:35.309: INFO: The status of Pod netserver-0 is Running (Ready = true) May 24 19:02:35.318: INFO: The status of Pod netserver-1 is Running (Ready = false) May 24 19:02:37.323: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 24 19:02:39.339: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 24 19:02:39.339: INFO: Breadth first check of 10.244.1.109 on host 172.18.0.7... May 24 19:02:39.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.244.1.109&port=8080&tries=1'] Namespace:pod-network-test-1714 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:39.343: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:39.476: INFO: Waiting for responses: map[] May 24 19:02:39.476: INFO: reached 10.244.1.109 after 0/1 tries May 24 19:02:39.476: INFO: Breadth first check of 10.244.2.18 on host 172.18.0.5... May 24 19:02:39.479: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.244.2.18&port=8080&tries=1'] Namespace:pod-network-test-1714 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:02:39.479: INFO: >>> kubeConfig: /root/.kube/config May 24 19:02:39.599: INFO: Waiting for responses: map[] May 24 19:02:39.599: INFO: reached 10.244.2.18 after 0/1 tries May 24 19:02:39.599: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:39.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1714" for this suite. 
• [SLOW TEST:16.356 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:39.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating api versions May 24 19:02:39.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8541 api-versions' May 24 19:02:39.845: INFO: stderr: "" May 24 19:02:39.845: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nk8s.cni.cncf.io/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nprojectcontour.io/v1\nprojectcontour.io/v1alpha1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:39.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8541" for this suite. 
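------------------------------
The kubectl api-versions check above only asserts that the bare "v1" group/version appears in the list the apiserver advertises. The same discovery can be done programmatically; a minimal client-go sketch, assuming the same kubeconfig path as the suite:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // "apps/v1", "batch/v1", ..., and bare "v1" for the core group
			if v.GroupVersion == "v1" {
				found = true
			}
		}
	}
	fmt.Println("core v1 available:", found)
}
------------------------------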
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":18,"skipped":292,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:36.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:02:37.429: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:02:40.444: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:40.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6087" for this suite. STEP: Destroying namespace "webhook-6087-markers" for this suite. 
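------------------------------
The fail-closed behavior verified in this webhook test (its teardown continues below) comes from registering a ValidatingWebhookConfiguration whose backend is unreachable and whose failurePolicy is Fail: the apiserver must then reject every matching request, here configmap creation. A sketch of such a registration follows, with illustrative names; the suite's serving-cert and namespace-selector plumbing is omitted.

package main

import (
	"context"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	fail := admv1.Fail // reject the request when the webhook cannot be reached
	none := admv1.SideEffectClassNone
	path := "/unreachable" // deliberately wrong path: the backend never answers

	whc := &admv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-example"},
		Webhooks: []admv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{
					Namespace: "webhook-6087", Name: "e2e-test-webhook", Path: &path,
				},
				// CABundle omitted for brevity; the suite provisions its own cert.
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.Background(), whc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------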
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":22,"skipped":482,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:40.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:40.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-280" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":23,"skipped":485,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:39.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-0f38dead-deba-4ae1-b620-30047e91af18 STEP: Creating a pod to test consume configMaps May 24 19:02:39.903: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7" in namespace "projected-8183" to be "Succeeded or Failed" May 24 19:02:39.906: INFO: Pod "pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793569ms May 24 19:02:41.910: INFO: Pod "pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006821172s May 24 19:02:43.914: INFO: Pod "pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010308377s STEP: Saw pod success May 24 19:02:43.914: INFO: Pod "pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7" satisfied condition "Succeeded or Failed" May 24 19:02:43.917: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7 container agnhost-container: STEP: delete the pod May 24 19:02:43.932: INFO: Waiting for pod pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7 to disappear May 24 19:02:43.936: INFO: Pod pod-projected-configmaps-769516fd-7d42-4002-b49e-87da09c072e7 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:43.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8183" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":293,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:23.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9370 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9370;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9370 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9370;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9370.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9370.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9370.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9370.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9370.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9370.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9370.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9370.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9370.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 61.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.61_udp@PTR;check="$$(dig +tcp +noall +answer +search 61.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.61_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9370 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9370;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9370 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9370;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9370.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9370.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9370.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9370.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9370.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9370.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9370.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9370.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9370.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9370.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 61.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.61_udp@PTR;check="$$(dig +tcp +noall +answer +search 61.174.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.174.61_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:02:25.156: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.160: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.179: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.210: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.214: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.218: INFO: Unable to read jessie_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.226: INFO: Unable to read jessie_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:25.265: INFO: Lookups using 
dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9370 wheezy_tcp@dns-test-service.dns-9370 wheezy_udp@dns-test-service.dns-9370.svc wheezy_tcp@dns-test-service.dns-9370.svc wheezy_udp@_http._tcp.dns-test-service.dns-9370.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9370 jessie_tcp@dns-test-service.dns-9370 jessie_udp@dns-test-service.dns-9370.svc jessie_tcp@dns-test-service.dns-9370.svc] May 24 19:02:30.269: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.273: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.276: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.279: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.286: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.316: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.320: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.323: INFO: Unable to read jessie_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.327: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.330: INFO: Unable to read jessie_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.423: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:30.455: INFO: 
Lookups using dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9370 wheezy_tcp@dns-test-service.dns-9370 wheezy_udp@dns-test-service.dns-9370.svc wheezy_tcp@dns-test-service.dns-9370.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9370 jessie_tcp@dns-test-service.dns-9370 jessie_udp@dns-test-service.dns-9370.svc jessie_tcp@dns-test-service.dns-9370.svc] May 24 19:02:35.270: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.274: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.281: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.285: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.289: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.322: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.325: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.329: INFO: Unable to read jessie_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.339: INFO: Unable to read jessie_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:35.373: INFO: Lookups using 
dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9370 wheezy_tcp@dns-test-service.dns-9370 wheezy_udp@dns-test-service.dns-9370.svc wheezy_tcp@dns-test-service.dns-9370.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9370 jessie_tcp@dns-test-service.dns-9370 jessie_udp@dns-test-service.dns-9370.svc jessie_tcp@dns-test-service.dns-9370.svc] May 24 19:02:40.270: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.273: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.281: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.285: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.290: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.316: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.319: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.322: INFO: Unable to read jessie_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.325: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.328: INFO: Unable to read jessie_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.331: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:40.359: INFO: Lookups using 
dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9370 wheezy_tcp@dns-test-service.dns-9370 wheezy_udp@dns-test-service.dns-9370.svc wheezy_tcp@dns-test-service.dns-9370.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9370 jessie_tcp@dns-test-service.dns-9370 jessie_udp@dns-test-service.dns-9370.svc jessie_tcp@dns-test-service.dns-9370.svc] May 24 19:02:45.270: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.274: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.281: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.285: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.288: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.323: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.326: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.330: INFO: Unable to read jessie_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.334: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.338: INFO: Unable to read jessie_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.341: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:45.372: INFO: Lookups using 
dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9370 wheezy_tcp@dns-test-service.dns-9370 wheezy_udp@dns-test-service.dns-9370.svc wheezy_tcp@dns-test-service.dns-9370.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9370 jessie_tcp@dns-test-service.dns-9370 jessie_udp@dns-test-service.dns-9370.svc jessie_tcp@dns-test-service.dns-9370.svc] May 24 19:02:50.270: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.274: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.278: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.282: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.286: INFO: Unable to read wheezy_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.289: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.324: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.328: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.331: INFO: Unable to read jessie_udp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.335: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370 from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.339: INFO: Unable to read jessie_udp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-9370.svc from pod dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12: the server could not find the requested resource (get pods dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12) May 24 19:02:50.370: INFO: Lookups using 
dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9370 wheezy_tcp@dns-test-service.dns-9370 wheezy_udp@dns-test-service.dns-9370.svc wheezy_tcp@dns-test-service.dns-9370.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9370 jessie_tcp@dns-test-service.dns-9370 jessie_udp@dns-test-service.dns-9370.svc jessie_tcp@dns-test-service.dns-9370.svc] May 24 19:02:55.364: INFO: DNS probes using dns-9370/dns-test-25de6d8b-6973-41f5-ac07-70e3f7bcfc12 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:55.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9370" for this suite. • [SLOW TEST:32.319 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":360,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:55.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-78324199-d9a3-4ed7-bf98-7eb77c3e2760 STEP: Creating a pod to test consume configMaps May 24 19:02:55.465: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4" in namespace "projected-9257" to be "Succeeded or Failed" May 24 19:02:55.468: INFO: Pod "pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550242ms May 24 19:02:57.472: INFO: Pod "pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006657729s STEP: Saw pod success May 24 19:02:57.472: INFO: Pod "pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4" satisfied condition "Succeeded or Failed" May 24 19:02:57.475: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4 container agnhost-container: STEP: delete the pod May 24 19:02:57.490: INFO: Waiting for pod pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4 to disappear May 24 19:02:57.493: INFO: Pod pod-projected-configmaps-41867197-83e2-4305-bc36-c3d5a5723bf4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:57.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9257" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":372,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:57.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-e86c91dd-e75c-4655-8d94-bf1e518c8213 STEP: Creating a pod to test consume configMaps May 24 19:02:57.578: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da" in namespace "projected-5600" to be "Succeeded or Failed" May 24 19:02:57.581: INFO: Pod "pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863168ms May 24 19:02:59.585: INFO: Pod "pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00642934s STEP: Saw pod success May 24 19:02:59.585: INFO: Pod "pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da" satisfied condition "Succeeded or Failed" May 24 19:02:59.587: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da container agnhost-container: STEP: delete the pod May 24 19:02:59.603: INFO: Waiting for pod pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da to disappear May 24 19:02:59.606: INFO: Pod pod-projected-configmaps-64cf7fff-b4c8-4f7d-a3b5-1d9ce92405da no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:02:59.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5600" for this suite. 
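------------------------------
The two projected-configMap tests above differ only in how the projected files are shaped: the defaultMode variant sets ProjectedVolumeSource.DefaultMode for every file, while the mappings variant remaps keys via Items. A small sketch of the defaultMode shape; the volume name, ConfigMap name, and the 0400 value are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400) // applies to every projected file unless a per-item Mode overrides it
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol) // drop this Volume into a PodSpec and mount it to use it
}
------------------------------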
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":386,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:40.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-6922 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 19:02:40.615: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 24 19:02:40.633: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 19:02:42.637: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 19:02:44.637: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:46.638: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:48.638: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:50.637: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:52.637: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:54.638: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:56.637: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:02:58.638: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:00.637: INFO: The status of Pod netserver-0 is Running (Ready = true) May 24 19:03:00.642: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 24 19:03:02.662: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 24 19:03:02.662: INFO: Breadth first check of 10.244.1.118 on host 172.18.0.7... May 24 19:03:02.665: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.32:9080/dial?request=hostname&protocol=udp&host=10.244.1.118&port=8081&tries=1'] Namespace:pod-network-test-6922 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:03:02.665: INFO: >>> kubeConfig: /root/.kube/config May 24 19:03:02.808: INFO: Waiting for responses: map[] May 24 19:03:02.808: INFO: reached 10.244.1.118 after 0/1 tries May 24 19:03:02.808: INFO: Breadth first check of 10.244.2.28 on host 172.18.0.5... 
May 24 19:03:02.811: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.32:9080/dial?request=hostname&protocol=udp&host=10.244.2.28&port=8081&tries=1'] Namespace:pod-network-test-6922 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:03:02.812: INFO: >>> kubeConfig: /root/.kube/config May 24 19:03:02.943: INFO: Waiting for responses: map[] May 24 19:03:02.943: INFO: reached 10.244.2.28 after 0/1 tries May 24 19:03:02.943: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:02.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6922" for this suite. • [SLOW TEST:22.366 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":503,"failed":0} S ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:09.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-c32fb26e-463d-49f9-85cd-c54e05e7c298 in namespace container-probe-624 May 24 19:02:11.756: INFO: Started pod busybox-c32fb26e-463d-49f9-85cd-c54e05e7c298 in namespace container-probe-624 STEP: checking the pod's current state and verifying that restartCount is present May 24 19:02:11.759: INFO: Initial restart count of pod busybox-c32fb26e-463d-49f9-85cd-c54e05e7c298 is 0 May 24 19:03:05.890: INFO: Restart count of pod container-probe-624/busybox-c32fb26e-463d-49f9-85cd-c54e05e7c298 is now 1 (54.131507695s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:05.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-624" for this suite. 
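------------------------------
The container-probe test above (its pass record follows) relies on kubelet-driven exec probes: busybox creates /tmp/health and later removes it, the kubelet runs "cat /tmp/health" each probe period, and the first failure after the file disappears triggers a restart, observed as restartCount going from 0 to 1 after ~54s. A sketch of such a pod, assuming the v1.20-era field names in use here (Probe.Handler was renamed ProbeHandler in later releases) and illustrative timings:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// Healthy for 30s, then the probe file disappears and the probe starts failing.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1, // restart on the first failed exec
				},
			}},
		},
	}
}

func main() {
	fmt.Println(livenessPod().Name)
}
------------------------------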
• [SLOW TEST:56.194 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":174,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:05.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating pod May 24 19:03:07.976: INFO: Pod pod-hostip-114ad18c-a0ab-4b6c-9412-14a73bd10ac8 has hostIP: 172.18.0.7 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:07.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8931" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":181,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:08.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:03:10.049: INFO: Deleting pod "var-expansion-d5fc9d27-43fc-4c6b-b9b4-4491c960d0ec" in namespace "var-expansion-3905" May 24 19:03:10.055: INFO: Wait up to 5m0s for pod "var-expansion-d5fc9d27-43fc-4c6b-b9b4-4491c960d0ec" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:18.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3905" for this suite. 
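------------------------------
The pods-8931 host-IP test above is essentially a one-liner against the API: once the pod is scheduled and running, status.hostIP carries the node's address (172.18.0.7 in this run). A minimal sketch, reusing the pod name from the log purely for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("pods-8931").
		Get(context.Background(), "pod-hostip-114ad18c-a0ab-4b6c-9412-14a73bd10ac8", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP) // empty until the pod is bound to a node
}
------------------------------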
• [SLOW TEST:10.064 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":13,"skipped":191,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:43.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6654.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6654.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6654.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 19:02:50.041: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.045: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.049: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.052: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.065: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.068: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.072: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.076: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:50.083: INFO: Lookups using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local] May 24 19:02:55.088: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource 
(get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.091: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.094: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.098: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.109: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.113: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.116: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.119: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:02:55.124: INFO: Lookups using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local] May 24 19:03:00.088: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.092: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.096: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.099: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local from 
pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.111: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.115: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.119: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.122: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:00.130: INFO: Lookups using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local] May 24 19:03:05.088: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.092: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.096: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.100: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.112: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.116: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods 
dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.120: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.124: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:05.133: INFO: Lookups using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local] May 24 19:03:10.088: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.092: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.096: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.100: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.111: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.114: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.118: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.121: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:10.129: INFO: Lookups using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local] May 24 19:03:15.088: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.092: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.096: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.101: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.113: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.116: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.120: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.124: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local from pod dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841: the server could not find the requested resource (get pods dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841) May 24 19:03:15.132: INFO: Lookups using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6654.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6654.svc.cluster.local jessie_udp@dns-test-service-2.dns-6654.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6654.svc.cluster.local] May 24 19:03:20.132: INFO: DNS probes using dns-6654/dns-test-a8a59b7d-71d2-4214-a196-cc9bae6a1841 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:20.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6654" for this suite. • [SLOW TEST:36.186 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":20,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:02.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8382 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating statefulset ss in namespace statefulset-8382 May 24 19:03:02.999: INFO: Found 0 stateful pods, waiting for 1 May 24 19:03:13.003: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 19:03:13.030: INFO: Deleting all statefulset in ns statefulset-8382 May 24 19:03:13.033: INFO: Scaling statefulset ss to 0 May 24 19:03:23.237: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:03:23.240: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:23.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8382" for this suite. 
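------------------------------
The StatefulSet block above exercises the scale subresource rather than editing the StatefulSet object directly: read /scale, change Spec.Replicas, write it back, and verify the StatefulSet's own Spec.Replicas follows. A minimal client-go sketch of that round trip (not the e2e framework's code), reusing the namespace and name from the log; the kubeconfig path and target replica count are illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// GET the scale subresource for statefulset "ss".
	scale, err := cs.AppsV1().StatefulSets("statefulset-8382").GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Bump replicas and write the subresource back; the test then checks
	// that the StatefulSet's Spec.Replicas reflects the change.
	scale.Spec.Replicas = 2
	if _, err := cs.AppsV1().StatefulSets("statefulset-8382").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scale subresource updated")
}
------------------------------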
• [SLOW TEST:20.302 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":25,"skipped":504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:59.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:27.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1258" for this suite. • [SLOW TEST:28.084 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":17,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:18.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:03:18.109: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5362 I0524 19:03:18.132579 30 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5362, replica count: 1 I0524 19:03:19.183102 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:03:20.183278 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:03:20.291: INFO: Created: latency-svc-9plqr May 24 19:03:20.296: INFO: Got endpoints: latency-svc-9plqr [13.331829ms] May 24 19:03:20.305: INFO: Created: latency-svc-rtkxh May 24 19:03:20.307: INFO: Created: latency-svc-xrsf8 May 24 19:03:20.308: INFO: Got endpoints: latency-svc-rtkxh [11.890546ms] May 24 19:03:20.310: INFO: Created: latency-svc-kfp7q May 24 19:03:20.311: INFO: Got endpoints: latency-svc-xrsf8 [13.957805ms] May 24 19:03:20.314: INFO: Created: latency-svc-cblsf May 24 19:03:20.314: INFO: Got endpoints: latency-svc-kfp7q [17.290358ms] May 24 19:03:20.317: INFO: Created: latency-svc-zgww9 May 24 19:03:20.317: INFO: Got endpoints: latency-svc-cblsf [19.687416ms] May 24 19:03:20.319: INFO: Created: latency-svc-w4rr9 May 24 19:03:20.320: INFO: Got endpoints: latency-svc-zgww9 [22.694101ms] May 24 19:03:20.321: INFO: Created: latency-svc-xjrqj May 24 19:03:20.323: INFO: Got endpoints: latency-svc-w4rr9 [26.119265ms] May 24 19:03:20.327: INFO: Created: latency-svc-ghq9r May 24 19:03:20.327: INFO: Got endpoints: latency-svc-xjrqj [29.882278ms] May 24 19:03:20.330: INFO: Got endpoints: latency-svc-ghq9r [33.61185ms] May 24 19:03:20.332: INFO: Created: latency-svc-6nr4t May 24 19:03:20.337: INFO: Got endpoints: latency-svc-6nr4t [39.918548ms] May 24 19:03:20.337: INFO: Created: latency-svc-l4rdp May 24 19:03:20.340: INFO: Got endpoints: latency-svc-l4rdp [43.187355ms] May 24 19:03:20.345: INFO: Created: latency-svc-nz9hk May 24 19:03:20.348: INFO: Created: latency-svc-r7mbc May 24 19:03:20.349: INFO: Got endpoints: latency-svc-nz9hk [51.610455ms] May 24 19:03:20.351: INFO: Created: latency-svc-2crb2 May 24 19:03:20.351: INFO: Got endpoints: latency-svc-r7mbc [54.486899ms] May 24 19:03:20.354: INFO: Created: latency-svc-c92fp May 24 19:03:20.355: INFO: Got endpoints: latency-svc-2crb2 [57.67612ms] May 24 19:03:20.356: INFO: Created: latency-svc-7k6jj May 24 19:03:20.357: INFO: Got endpoints: latency-svc-c92fp [60.474971ms] May 24 19:03:20.359: INFO: Got endpoints: latency-svc-7k6jj [61.780035ms] May 24 19:03:20.359: INFO: Created: latency-svc-xmdxd May 24 19:03:20.362: INFO: Created: latency-svc-t5h5n May 24 19:03:20.364: INFO: Got endpoints: latency-svc-xmdxd [55.687635ms] May 24 
19:03:20.365: INFO: Created: latency-svc-4ll5j May 24 19:03:20.365: INFO: Got endpoints: latency-svc-t5h5n [54.312072ms] May 24 19:03:20.367: INFO: Got endpoints: latency-svc-4ll5j [53.212018ms] May 24 19:03:20.368: INFO: Created: latency-svc-ctq72 May 24 19:03:20.376: INFO: Got endpoints: latency-svc-ctq72 [17.199576ms] May 24 19:03:20.380: INFO: Created: latency-svc-hvxpv May 24 19:03:20.383: INFO: Created: latency-svc-7zstz May 24 19:03:20.383: INFO: Got endpoints: latency-svc-hvxpv [66.063676ms] May 24 19:03:20.385: INFO: Got endpoints: latency-svc-7zstz [65.2053ms] May 24 19:03:20.386: INFO: Created: latency-svc-6bdk2 May 24 19:03:20.387: INFO: Created: latency-svc-6wswd May 24 19:03:20.388: INFO: Got endpoints: latency-svc-6bdk2 [64.782212ms] May 24 19:03:20.390: INFO: Created: latency-svc-rwxn2 May 24 19:03:20.390: INFO: Got endpoints: latency-svc-6wswd [63.740457ms] May 24 19:03:20.392: INFO: Created: latency-svc-5lfcv May 24 19:03:20.392: INFO: Got endpoints: latency-svc-rwxn2 [61.686021ms] May 24 19:03:20.394: INFO: Created: latency-svc-s8x4t May 24 19:03:20.395: INFO: Got endpoints: latency-svc-5lfcv [57.792924ms] May 24 19:03:20.396: INFO: Created: latency-svc-xmwv6 May 24 19:03:20.397: INFO: Got endpoints: latency-svc-s8x4t [56.358676ms] May 24 19:03:20.398: INFO: Created: latency-svc-9xkkx May 24 19:03:20.399: INFO: Got endpoints: latency-svc-xmwv6 [50.481026ms] May 24 19:03:20.401: INFO: Created: latency-svc-2tpmb May 24 19:03:20.402: INFO: Got endpoints: latency-svc-9xkkx [50.175486ms] May 24 19:03:20.404: INFO: Got endpoints: latency-svc-2tpmb [49.09102ms] May 24 19:03:20.404: INFO: Created: latency-svc-5rzzg May 24 19:03:20.406: INFO: Created: latency-svc-8p7fj May 24 19:03:20.407: INFO: Got endpoints: latency-svc-5rzzg [49.474754ms] May 24 19:03:20.408: INFO: Created: latency-svc-h279g May 24 19:03:20.410: INFO: Created: latency-svc-kg89d May 24 19:03:20.412: INFO: Created: latency-svc-md4c7 May 24 19:03:20.414: INFO: Created: latency-svc-m5vxd May 24 19:03:20.415: INFO: Created: latency-svc-m6dft May 24 19:03:20.417: INFO: Created: latency-svc-bwrj8 May 24 19:03:20.419: INFO: Created: latency-svc-7bw4g May 24 19:03:20.421: INFO: Created: latency-svc-nl82c May 24 19:03:20.424: INFO: Created: latency-svc-rrt22 May 24 19:03:20.427: INFO: Created: latency-svc-z4lnk May 24 19:03:20.431: INFO: Created: latency-svc-877p9 May 24 19:03:20.434: INFO: Created: latency-svc-pdv4c May 24 19:03:20.436: INFO: Created: latency-svc-qt67d May 24 19:03:20.438: INFO: Created: latency-svc-8l592 May 24 19:03:20.445: INFO: Got endpoints: latency-svc-8p7fj [80.896463ms] May 24 19:03:20.452: INFO: Created: latency-svc-whlnm May 24 19:03:20.495: INFO: Got endpoints: latency-svc-h279g [130.559298ms] May 24 19:03:20.503: INFO: Created: latency-svc-kljjc May 24 19:03:20.545: INFO: Got endpoints: latency-svc-kg89d [177.873408ms] May 24 19:03:20.554: INFO: Created: latency-svc-xhqbn May 24 19:03:20.596: INFO: Got endpoints: latency-svc-md4c7 [219.625087ms] May 24 19:03:20.604: INFO: Created: latency-svc-c2cgr May 24 19:03:20.650: INFO: Got endpoints: latency-svc-m5vxd [267.326064ms] May 24 19:03:20.657: INFO: Created: latency-svc-57wfl May 24 19:03:20.696: INFO: Got endpoints: latency-svc-m6dft [311.096508ms] May 24 19:03:20.703: INFO: Created: latency-svc-bshwd May 24 19:03:20.746: INFO: Got endpoints: latency-svc-bwrj8 [357.838106ms] May 24 19:03:20.753: INFO: Created: latency-svc-qzxqf May 24 19:03:20.797: INFO: Got endpoints: latency-svc-7bw4g [406.654396ms] May 24 19:03:20.805: INFO: Created: 
latency-svc-d2wks May 24 19:03:20.846: INFO: Got endpoints: latency-svc-nl82c [453.872535ms] May 24 19:03:20.854: INFO: Created: latency-svc-5jgxc May 24 19:03:20.895: INFO: Got endpoints: latency-svc-rrt22 [500.440284ms] May 24 19:03:20.903: INFO: Created: latency-svc-snq49 May 24 19:03:20.948: INFO: Got endpoints: latency-svc-z4lnk [550.939529ms] May 24 19:03:20.955: INFO: Created: latency-svc-8zzgh May 24 19:03:20.996: INFO: Got endpoints: latency-svc-877p9 [597.085243ms] May 24 19:03:21.004: INFO: Created: latency-svc-cpx6s May 24 19:03:21.046: INFO: Got endpoints: latency-svc-pdv4c [644.191012ms] May 24 19:03:21.053: INFO: Created: latency-svc-rcvcl May 24 19:03:21.098: INFO: Got endpoints: latency-svc-qt67d [694.417156ms] May 24 19:03:21.106: INFO: Created: latency-svc-vd992 May 24 19:03:21.145: INFO: Got endpoints: latency-svc-8l592 [738.590012ms] May 24 19:03:21.153: INFO: Created: latency-svc-6vpxw May 24 19:03:21.196: INFO: Got endpoints: latency-svc-whlnm [750.864803ms] May 24 19:03:21.203: INFO: Created: latency-svc-pbpvc May 24 19:03:21.246: INFO: Got endpoints: latency-svc-kljjc [750.599284ms] May 24 19:03:21.254: INFO: Created: latency-svc-wrkf2 May 24 19:03:21.295: INFO: Got endpoints: latency-svc-xhqbn [749.683258ms] May 24 19:03:21.303: INFO: Created: latency-svc-2vj7s May 24 19:03:21.346: INFO: Got endpoints: latency-svc-c2cgr [749.859258ms] May 24 19:03:21.354: INFO: Created: latency-svc-7w5zk May 24 19:03:21.396: INFO: Got endpoints: latency-svc-57wfl [745.391184ms] May 24 19:03:21.403: INFO: Created: latency-svc-xpgqf May 24 19:03:21.445: INFO: Got endpoints: latency-svc-bshwd [749.140344ms] May 24 19:03:21.453: INFO: Created: latency-svc-c856j May 24 19:03:21.497: INFO: Got endpoints: latency-svc-qzxqf [751.027001ms] May 24 19:03:21.505: INFO: Created: latency-svc-svvp9 May 24 19:03:21.545: INFO: Got endpoints: latency-svc-d2wks [747.895554ms] May 24 19:03:21.552: INFO: Created: latency-svc-fksw9 May 24 19:03:21.596: INFO: Got endpoints: latency-svc-5jgxc [749.96828ms] May 24 19:03:21.603: INFO: Created: latency-svc-n8jhk May 24 19:03:21.646: INFO: Got endpoints: latency-svc-snq49 [751.162185ms] May 24 19:03:21.654: INFO: Created: latency-svc-zw2n7 May 24 19:03:21.696: INFO: Got endpoints: latency-svc-8zzgh [748.830831ms] May 24 19:03:21.704: INFO: Created: latency-svc-9jdfx May 24 19:03:21.824: INFO: Got endpoints: latency-svc-cpx6s [827.952014ms] May 24 19:03:21.824: INFO: Got endpoints: latency-svc-rcvcl [778.425065ms] May 24 19:03:21.833: INFO: Created: latency-svc-7p4hm May 24 19:03:21.923: INFO: Created: latency-svc-72qq8 May 24 19:03:21.923: INFO: Got endpoints: latency-svc-vd992 [825.21588ms] May 24 19:03:21.924: INFO: Got endpoints: latency-svc-6vpxw [778.201135ms] May 24 19:03:21.933: INFO: Created: latency-svc-j7mfq May 24 19:03:21.939: INFO: Created: latency-svc-klzxm May 24 19:03:21.947: INFO: Got endpoints: latency-svc-pbpvc [750.890625ms] May 24 19:03:21.954: INFO: Created: latency-svc-mqwnb May 24 19:03:21.996: INFO: Got endpoints: latency-svc-wrkf2 [750.24169ms] May 24 19:03:22.004: INFO: Created: latency-svc-2wdmv May 24 19:03:22.046: INFO: Got endpoints: latency-svc-2vj7s [750.359372ms] May 24 19:03:22.053: INFO: Created: latency-svc-zzzt7 May 24 19:03:22.095: INFO: Got endpoints: latency-svc-7w5zk [749.481911ms] May 24 19:03:22.103: INFO: Created: latency-svc-wrvkx May 24 19:03:22.146: INFO: Got endpoints: latency-svc-xpgqf [749.97827ms] May 24 19:03:22.152: INFO: Created: latency-svc-7vhw4 May 24 19:03:22.196: INFO: Got endpoints: 
latency-svc-c856j [750.652129ms] May 24 19:03:22.206: INFO: Created: latency-svc-nqplj May 24 19:03:22.245: INFO: Got endpoints: latency-svc-svvp9 [748.116781ms] May 24 19:03:22.252: INFO: Created: latency-svc-55q7t May 24 19:03:22.296: INFO: Got endpoints: latency-svc-fksw9 [751.24347ms] May 24 19:03:22.304: INFO: Created: latency-svc-wwgxs May 24 19:03:22.345: INFO: Got endpoints: latency-svc-n8jhk [748.929875ms] May 24 19:03:22.353: INFO: Created: latency-svc-p4g9p May 24 19:03:22.395: INFO: Got endpoints: latency-svc-zw2n7 [748.451881ms] May 24 19:03:22.402: INFO: Created: latency-svc-4dmwq May 24 19:03:22.445: INFO: Got endpoints: latency-svc-9jdfx [748.814662ms] May 24 19:03:22.453: INFO: Created: latency-svc-fj4xm May 24 19:03:22.496: INFO: Got endpoints: latency-svc-7p4hm [671.296606ms] May 24 19:03:22.509: INFO: Created: latency-svc-zdhhr May 24 19:03:22.546: INFO: Got endpoints: latency-svc-72qq8 [721.662114ms] May 24 19:03:22.553: INFO: Created: latency-svc-v7cdn May 24 19:03:22.645: INFO: Got endpoints: latency-svc-j7mfq [721.684265ms] May 24 19:03:22.653: INFO: Created: latency-svc-6vkfr May 24 19:03:22.695: INFO: Got endpoints: latency-svc-klzxm [771.770149ms] May 24 19:03:22.703: INFO: Created: latency-svc-w5jvf May 24 19:03:22.746: INFO: Got endpoints: latency-svc-mqwnb [798.863847ms] May 24 19:03:22.754: INFO: Created: latency-svc-8k58g May 24 19:03:22.795: INFO: Got endpoints: latency-svc-2wdmv [798.632195ms] May 24 19:03:22.803: INFO: Created: latency-svc-8xbkz May 24 19:03:22.846: INFO: Got endpoints: latency-svc-zzzt7 [799.895084ms] May 24 19:03:22.854: INFO: Created: latency-svc-wdqbl May 24 19:03:22.896: INFO: Got endpoints: latency-svc-wrvkx [800.470821ms] May 24 19:03:22.903: INFO: Created: latency-svc-d89vj May 24 19:03:22.946: INFO: Got endpoints: latency-svc-7vhw4 [799.842519ms] May 24 19:03:22.953: INFO: Created: latency-svc-gwfcz May 24 19:03:22.996: INFO: Got endpoints: latency-svc-nqplj [800.33278ms] May 24 19:03:23.004: INFO: Created: latency-svc-8pqwp May 24 19:03:23.046: INFO: Got endpoints: latency-svc-55q7t [800.532369ms] May 24 19:03:23.053: INFO: Created: latency-svc-ht4wq May 24 19:03:23.094: INFO: Got endpoints: latency-svc-wwgxs [797.50927ms] May 24 19:03:23.101: INFO: Created: latency-svc-tzzdf May 24 19:03:23.146: INFO: Got endpoints: latency-svc-p4g9p [801.324412ms] May 24 19:03:23.154: INFO: Created: latency-svc-gcx5b May 24 19:03:23.195: INFO: Got endpoints: latency-svc-4dmwq [800.46169ms] May 24 19:03:23.203: INFO: Created: latency-svc-cmdzf May 24 19:03:23.245: INFO: Got endpoints: latency-svc-fj4xm [799.027712ms] May 24 19:03:23.251: INFO: Created: latency-svc-848wb May 24 19:03:23.296: INFO: Got endpoints: latency-svc-zdhhr [800.27114ms] May 24 19:03:23.304: INFO: Created: latency-svc-xrfzs May 24 19:03:23.346: INFO: Got endpoints: latency-svc-v7cdn [799.406039ms] May 24 19:03:23.353: INFO: Created: latency-svc-xl8q8 May 24 19:03:23.394: INFO: Got endpoints: latency-svc-6vkfr [748.220396ms] May 24 19:03:23.399: INFO: Created: latency-svc-m6ms4 May 24 19:03:23.445: INFO: Got endpoints: latency-svc-w5jvf [750.031091ms] May 24 19:03:23.453: INFO: Created: latency-svc-jfwkw May 24 19:03:23.501: INFO: Got endpoints: latency-svc-8k58g [754.846673ms] May 24 19:03:23.509: INFO: Created: latency-svc-9kb9s May 24 19:03:23.545: INFO: Got endpoints: latency-svc-8xbkz [749.692444ms] May 24 19:03:23.552: INFO: Created: latency-svc-pqdz7 May 24 19:03:23.596: INFO: Got endpoints: latency-svc-wdqbl [750.034823ms] May 24 19:03:23.603: INFO: Created: 
latency-svc-5l7cl May 24 19:03:23.645: INFO: Got endpoints: latency-svc-d89vj [749.343345ms] May 24 19:03:23.653: INFO: Created: latency-svc-drvst May 24 19:03:23.696: INFO: Got endpoints: latency-svc-gwfcz [750.536206ms] May 24 19:03:23.703: INFO: Created: latency-svc-dj67r May 24 19:03:23.746: INFO: Got endpoints: latency-svc-8pqwp [749.434476ms] May 24 19:03:23.753: INFO: Created: latency-svc-5fwww May 24 19:03:23.795: INFO: Got endpoints: latency-svc-ht4wq [749.566915ms] May 24 19:03:23.802: INFO: Created: latency-svc-n7tgs May 24 19:03:23.845: INFO: Got endpoints: latency-svc-tzzdf [750.897154ms] May 24 19:03:23.852: INFO: Created: latency-svc-6zrtm May 24 19:03:23.896: INFO: Got endpoints: latency-svc-gcx5b [749.377688ms] May 24 19:03:23.903: INFO: Created: latency-svc-c6ng5 May 24 19:03:23.945: INFO: Got endpoints: latency-svc-cmdzf [749.637133ms] May 24 19:03:23.953: INFO: Created: latency-svc-bmkh6 May 24 19:03:23.996: INFO: Got endpoints: latency-svc-848wb [751.727011ms] May 24 19:03:24.004: INFO: Created: latency-svc-pslmz May 24 19:03:24.046: INFO: Got endpoints: latency-svc-xrfzs [750.003498ms] May 24 19:03:24.053: INFO: Created: latency-svc-qqrcb May 24 19:03:24.145: INFO: Got endpoints: latency-svc-xl8q8 [799.871883ms] May 24 19:03:24.153: INFO: Created: latency-svc-vdkcn May 24 19:03:24.226: INFO: Got endpoints: latency-svc-m6ms4 [832.412597ms] May 24 19:03:24.237: INFO: Created: latency-svc-dlf4q May 24 19:03:24.248: INFO: Got endpoints: latency-svc-jfwkw [802.489585ms] May 24 19:03:24.255: INFO: Created: latency-svc-tp24w May 24 19:03:24.295: INFO: Got endpoints: latency-svc-9kb9s [794.469061ms] May 24 19:03:24.303: INFO: Created: latency-svc-gdj68 May 24 19:03:24.346: INFO: Got endpoints: latency-svc-pqdz7 [800.864783ms] May 24 19:03:24.353: INFO: Created: latency-svc-dd4vv May 24 19:03:24.395: INFO: Got endpoints: latency-svc-5l7cl [799.466419ms] May 24 19:03:24.403: INFO: Created: latency-svc-c4l56 May 24 19:03:24.446: INFO: Got endpoints: latency-svc-drvst [800.145364ms] May 24 19:03:24.453: INFO: Created: latency-svc-kqznr May 24 19:03:24.496: INFO: Got endpoints: latency-svc-dj67r [799.550957ms] May 24 19:03:24.503: INFO: Created: latency-svc-5fhk4 May 24 19:03:24.545: INFO: Got endpoints: latency-svc-5fwww [799.325034ms] May 24 19:03:24.553: INFO: Created: latency-svc-vnpk9 May 24 19:03:24.597: INFO: Got endpoints: latency-svc-n7tgs [801.305591ms] May 24 19:03:24.604: INFO: Created: latency-svc-lhbd7 May 24 19:03:24.645: INFO: Got endpoints: latency-svc-6zrtm [800.395541ms] May 24 19:03:24.653: INFO: Created: latency-svc-9d5l7 May 24 19:03:24.694: INFO: Got endpoints: latency-svc-c6ng5 [798.223498ms] May 24 19:03:24.700: INFO: Created: latency-svc-87pgl May 24 19:03:24.744: INFO: Got endpoints: latency-svc-bmkh6 [798.912066ms] May 24 19:03:24.749: INFO: Created: latency-svc-2ccbt May 24 19:03:24.794: INFO: Got endpoints: latency-svc-pslmz [797.601068ms] May 24 19:03:24.800: INFO: Created: latency-svc-lg5fg May 24 19:03:24.845: INFO: Got endpoints: latency-svc-qqrcb [799.054373ms] May 24 19:03:24.852: INFO: Created: latency-svc-n2x8z May 24 19:03:24.895: INFO: Got endpoints: latency-svc-vdkcn [749.902608ms] May 24 19:03:24.902: INFO: Created: latency-svc-swhwd May 24 19:03:24.946: INFO: Got endpoints: latency-svc-dlf4q [719.730561ms] May 24 19:03:24.953: INFO: Created: latency-svc-bm8fr May 24 19:03:24.996: INFO: Got endpoints: latency-svc-tp24w [747.684787ms] May 24 19:03:25.004: INFO: Created: latency-svc-98rb6 May 24 19:03:25.046: INFO: Got endpoints: 
latency-svc-gdj68 [750.401936ms] May 24 19:03:25.053: INFO: Created: latency-svc-xchsj May 24 19:03:25.096: INFO: Got endpoints: latency-svc-dd4vv [749.834966ms] May 24 19:03:25.103: INFO: Created: latency-svc-48vwx May 24 19:03:25.146: INFO: Got endpoints: latency-svc-c4l56 [750.266508ms] May 24 19:03:25.152: INFO: Created: latency-svc-bmbcl May 24 19:03:25.194: INFO: Got endpoints: latency-svc-kqznr [748.440842ms] May 24 19:03:25.200: INFO: Created: latency-svc-4mrfg May 24 19:03:25.245: INFO: Got endpoints: latency-svc-5fhk4 [749.184606ms] May 24 19:03:25.252: INFO: Created: latency-svc-r5pz7 May 24 19:03:25.294: INFO: Got endpoints: latency-svc-vnpk9 [748.946642ms] May 24 19:03:25.301: INFO: Created: latency-svc-p77kn May 24 19:03:25.346: INFO: Got endpoints: latency-svc-lhbd7 [749.299521ms] May 24 19:03:25.353: INFO: Created: latency-svc-fsb8r May 24 19:03:25.427: INFO: Got endpoints: latency-svc-9d5l7 [781.778106ms] May 24 19:03:25.435: INFO: Created: latency-svc-kwv5m May 24 19:03:25.445: INFO: Got endpoints: latency-svc-87pgl [750.807571ms] May 24 19:03:25.451: INFO: Created: latency-svc-vzpwp May 24 19:03:25.495: INFO: Got endpoints: latency-svc-2ccbt [751.04691ms] May 24 19:03:25.502: INFO: Created: latency-svc-8qjgx May 24 19:03:25.546: INFO: Got endpoints: latency-svc-lg5fg [751.655815ms] May 24 19:03:25.553: INFO: Created: latency-svc-7ms99 May 24 19:03:25.596: INFO: Got endpoints: latency-svc-n2x8z [750.717861ms] May 24 19:03:25.604: INFO: Created: latency-svc-j4mp4 May 24 19:03:25.646: INFO: Got endpoints: latency-svc-swhwd [750.299509ms] May 24 19:03:25.653: INFO: Created: latency-svc-gzmfq May 24 19:03:25.695: INFO: Got endpoints: latency-svc-bm8fr [749.248935ms] May 24 19:03:25.703: INFO: Created: latency-svc-9ccmq May 24 19:03:25.746: INFO: Got endpoints: latency-svc-98rb6 [750.05362ms] May 24 19:03:25.753: INFO: Created: latency-svc-jc7kk May 24 19:03:25.796: INFO: Got endpoints: latency-svc-xchsj [749.630805ms] May 24 19:03:25.803: INFO: Created: latency-svc-dqg4q May 24 19:03:25.845: INFO: Got endpoints: latency-svc-48vwx [748.981203ms] May 24 19:03:25.852: INFO: Created: latency-svc-gr266 May 24 19:03:25.896: INFO: Got endpoints: latency-svc-bmbcl [750.053867ms] May 24 19:03:25.902: INFO: Created: latency-svc-tgwhz May 24 19:03:25.996: INFO: Got endpoints: latency-svc-4mrfg [801.609381ms] May 24 19:03:26.003: INFO: Created: latency-svc-dbrj7 May 24 19:03:26.045: INFO: Got endpoints: latency-svc-r5pz7 [800.326936ms] May 24 19:03:26.053: INFO: Created: latency-svc-jt8vs May 24 19:03:26.096: INFO: Got endpoints: latency-svc-p77kn [801.531374ms] May 24 19:03:26.103: INFO: Created: latency-svc-92d9n May 24 19:03:26.145: INFO: Got endpoints: latency-svc-fsb8r [799.124748ms] May 24 19:03:26.152: INFO: Created: latency-svc-vhxc5 May 24 19:03:26.196: INFO: Got endpoints: latency-svc-kwv5m [768.172094ms] May 24 19:03:26.204: INFO: Created: latency-svc-2chrx May 24 19:03:26.245: INFO: Got endpoints: latency-svc-vzpwp [799.922549ms] May 24 19:03:26.252: INFO: Created: latency-svc-z2rmh May 24 19:03:26.295: INFO: Got endpoints: latency-svc-8qjgx [800.120984ms] May 24 19:03:26.303: INFO: Created: latency-svc-qsrcg May 24 19:03:26.345: INFO: Got endpoints: latency-svc-7ms99 [799.100039ms] May 24 19:03:26.352: INFO: Created: latency-svc-4fxvr May 24 19:03:26.399: INFO: Got endpoints: latency-svc-j4mp4 [802.597177ms] May 24 19:03:26.405: INFO: Created: latency-svc-dmsjt May 24 19:03:26.444: INFO: Got endpoints: latency-svc-gzmfq [798.544346ms] May 24 19:03:26.452: INFO: Created: 
latency-svc-5tm4l May 24 19:03:26.495: INFO: Got endpoints: latency-svc-9ccmq [799.534343ms] May 24 19:03:26.502: INFO: Created: latency-svc-hh8m4 May 24 19:03:26.546: INFO: Got endpoints: latency-svc-jc7kk [799.743463ms] May 24 19:03:26.553: INFO: Created: latency-svc-cn6f5 May 24 19:03:26.595: INFO: Got endpoints: latency-svc-dqg4q [799.560044ms] May 24 19:03:26.602: INFO: Created: latency-svc-6f495 May 24 19:03:26.646: INFO: Got endpoints: latency-svc-gr266 [801.169879ms] May 24 19:03:26.656: INFO: Created: latency-svc-48mbh May 24 19:03:26.695: INFO: Got endpoints: latency-svc-tgwhz [799.617948ms] May 24 19:03:26.704: INFO: Created: latency-svc-d8z9n May 24 19:03:26.746: INFO: Got endpoints: latency-svc-dbrj7 [749.745019ms] May 24 19:03:26.753: INFO: Created: latency-svc-5zxgb May 24 19:03:26.796: INFO: Got endpoints: latency-svc-jt8vs [750.725101ms] May 24 19:03:26.804: INFO: Created: latency-svc-4rpxf May 24 19:03:26.847: INFO: Got endpoints: latency-svc-92d9n [751.346443ms] May 24 19:03:26.855: INFO: Created: latency-svc-9tbxj May 24 19:03:26.896: INFO: Got endpoints: latency-svc-vhxc5 [750.87062ms] May 24 19:03:26.905: INFO: Created: latency-svc-k7knv May 24 19:03:26.945: INFO: Got endpoints: latency-svc-2chrx [749.380506ms] May 24 19:03:26.952: INFO: Created: latency-svc-cnblj May 24 19:03:27.002: INFO: Got endpoints: latency-svc-z2rmh [756.633636ms] May 24 19:03:27.013: INFO: Created: latency-svc-qm22z May 24 19:03:27.045: INFO: Got endpoints: latency-svc-qsrcg [749.823582ms] May 24 19:03:27.052: INFO: Created: latency-svc-vbm6l May 24 19:03:27.096: INFO: Got endpoints: latency-svc-4fxvr [751.198931ms] May 24 19:03:27.103: INFO: Created: latency-svc-75kkt May 24 19:03:27.146: INFO: Got endpoints: latency-svc-dmsjt [747.260723ms] May 24 19:03:27.153: INFO: Created: latency-svc-l9f7p May 24 19:03:27.197: INFO: Got endpoints: latency-svc-5tm4l [752.292465ms] May 24 19:03:27.204: INFO: Created: latency-svc-9ts2r May 24 19:03:27.246: INFO: Got endpoints: latency-svc-hh8m4 [750.572512ms] May 24 19:03:27.254: INFO: Created: latency-svc-8qrlb May 24 19:03:27.296: INFO: Got endpoints: latency-svc-cn6f5 [749.976646ms] May 24 19:03:27.303: INFO: Created: latency-svc-p8c77 May 24 19:03:27.345: INFO: Got endpoints: latency-svc-6f495 [750.007077ms] May 24 19:03:27.352: INFO: Created: latency-svc-6bc7c May 24 19:03:27.395: INFO: Got endpoints: latency-svc-48mbh [749.251565ms] May 24 19:03:27.402: INFO: Created: latency-svc-wxrpn May 24 19:03:27.445: INFO: Got endpoints: latency-svc-d8z9n [749.70225ms] May 24 19:03:27.453: INFO: Created: latency-svc-4h7tx May 24 19:03:27.496: INFO: Got endpoints: latency-svc-5zxgb [750.733186ms] May 24 19:03:27.504: INFO: Created: latency-svc-rznht May 24 19:03:27.546: INFO: Got endpoints: latency-svc-4rpxf [749.968749ms] May 24 19:03:27.553: INFO: Created: latency-svc-9l4gn May 24 19:03:27.596: INFO: Got endpoints: latency-svc-9tbxj [748.573802ms] May 24 19:03:27.603: INFO: Created: latency-svc-2dfpj May 24 19:03:27.646: INFO: Got endpoints: latency-svc-k7knv [749.855733ms] May 24 19:03:27.653: INFO: Created: latency-svc-f6zgr May 24 19:03:27.696: INFO: Got endpoints: latency-svc-cnblj [750.70044ms] May 24 19:03:27.704: INFO: Created: latency-svc-2kq5x May 24 19:03:27.745: INFO: Got endpoints: latency-svc-qm22z [743.585658ms] May 24 19:03:27.753: INFO: Created: latency-svc-nm5z6 May 24 19:03:27.796: INFO: Got endpoints: latency-svc-vbm6l [750.375118ms] May 24 19:03:27.803: INFO: Created: latency-svc-dv7qc May 24 19:03:27.846: INFO: Got endpoints: 
latency-svc-75kkt [749.403142ms] May 24 19:03:27.853: INFO: Created: latency-svc-7d2zl May 24 19:03:27.896: INFO: Got endpoints: latency-svc-l9f7p [749.832343ms] May 24 19:03:27.903: INFO: Created: latency-svc-56dxj May 24 19:03:27.946: INFO: Got endpoints: latency-svc-9ts2r [749.22246ms] May 24 19:03:27.954: INFO: Created: latency-svc-5r44s May 24 19:03:27.996: INFO: Got endpoints: latency-svc-8qrlb [750.109789ms] May 24 19:03:28.003: INFO: Created: latency-svc-xnkvn May 24 19:03:28.045: INFO: Got endpoints: latency-svc-p8c77 [749.660737ms] May 24 19:03:28.054: INFO: Created: latency-svc-krnjq May 24 19:03:28.094: INFO: Got endpoints: latency-svc-6bc7c [749.005113ms] May 24 19:03:28.106: INFO: Created: latency-svc-6txsp May 24 19:03:28.147: INFO: Got endpoints: latency-svc-wxrpn [751.445367ms] May 24 19:03:28.154: INFO: Created: latency-svc-mfgvx May 24 19:03:28.196: INFO: Got endpoints: latency-svc-4h7tx [750.352422ms] May 24 19:03:28.204: INFO: Created: latency-svc-7ss6s May 24 19:03:28.245: INFO: Got endpoints: latency-svc-rznht [748.967886ms] May 24 19:03:28.253: INFO: Created: latency-svc-cn8sc May 24 19:03:28.295: INFO: Got endpoints: latency-svc-9l4gn [748.911572ms] May 24 19:03:28.302: INFO: Created: latency-svc-ckpb7 May 24 19:03:28.345: INFO: Got endpoints: latency-svc-2dfpj [749.64189ms] May 24 19:03:28.396: INFO: Got endpoints: latency-svc-f6zgr [749.892854ms] May 24 19:03:28.444: INFO: Got endpoints: latency-svc-2kq5x [748.424598ms] May 24 19:03:28.495: INFO: Got endpoints: latency-svc-nm5z6 [749.213592ms] May 24 19:03:28.546: INFO: Got endpoints: latency-svc-dv7qc [749.801784ms] May 24 19:03:28.596: INFO: Got endpoints: latency-svc-7d2zl [750.6756ms] May 24 19:03:28.697: INFO: Got endpoints: latency-svc-56dxj [800.971961ms] May 24 19:03:28.745: INFO: Got endpoints: latency-svc-5r44s [799.214358ms] May 24 19:03:28.796: INFO: Got endpoints: latency-svc-xnkvn [799.568704ms] May 24 19:03:28.845: INFO: Got endpoints: latency-svc-krnjq [799.945049ms] May 24 19:03:28.896: INFO: Got endpoints: latency-svc-6txsp [801.129605ms] May 24 19:03:28.946: INFO: Got endpoints: latency-svc-mfgvx [798.983203ms] May 24 19:03:28.996: INFO: Got endpoints: latency-svc-7ss6s [800.465719ms] May 24 19:03:29.046: INFO: Got endpoints: latency-svc-cn8sc [800.299466ms] May 24 19:03:29.096: INFO: Got endpoints: latency-svc-ckpb7 [800.276319ms] May 24 19:03:29.096: INFO: Latencies: [11.890546ms 13.957805ms 17.199576ms 17.290358ms 19.687416ms 22.694101ms 26.119265ms 29.882278ms 33.61185ms 39.918548ms 43.187355ms 49.09102ms 49.474754ms 50.175486ms 50.481026ms 51.610455ms 53.212018ms 54.312072ms 54.486899ms 55.687635ms 56.358676ms 57.67612ms 57.792924ms 60.474971ms 61.686021ms 61.780035ms 63.740457ms 64.782212ms 65.2053ms 66.063676ms 80.896463ms 130.559298ms 177.873408ms 219.625087ms 267.326064ms 311.096508ms 357.838106ms 406.654396ms 453.872535ms 500.440284ms 550.939529ms 597.085243ms 644.191012ms 671.296606ms 694.417156ms 719.730561ms 721.662114ms 721.684265ms 738.590012ms 743.585658ms 745.391184ms 747.260723ms 747.684787ms 747.895554ms 748.116781ms 748.220396ms 748.424598ms 748.440842ms 748.451881ms 748.573802ms 748.814662ms 748.830831ms 748.911572ms 748.929875ms 748.946642ms 748.967886ms 748.981203ms 749.005113ms 749.140344ms 749.184606ms 749.213592ms 749.22246ms 749.248935ms 749.251565ms 749.299521ms 749.343345ms 749.377688ms 749.380506ms 749.403142ms 749.434476ms 749.481911ms 749.566915ms 749.630805ms 749.637133ms 749.64189ms 749.660737ms 749.683258ms 749.692444ms 749.70225ms 749.745019ms 749.801784ms 
749.823582ms 749.832343ms 749.834966ms 749.855733ms 749.859258ms 749.892854ms 749.902608ms 749.96828ms 749.968749ms 749.976646ms 749.97827ms 750.003498ms 750.007077ms 750.031091ms 750.034823ms 750.05362ms 750.053867ms 750.109789ms 750.24169ms 750.266508ms 750.299509ms 750.352422ms 750.359372ms 750.375118ms 750.401936ms 750.536206ms 750.572512ms 750.599284ms 750.652129ms 750.6756ms 750.70044ms 750.717861ms 750.725101ms 750.733186ms 750.807571ms 750.864803ms 750.87062ms 750.890625ms 750.897154ms 751.027001ms 751.04691ms 751.162185ms 751.198931ms 751.24347ms 751.346443ms 751.445367ms 751.655815ms 751.727011ms 752.292465ms 754.846673ms 756.633636ms 768.172094ms 771.770149ms 778.201135ms 778.425065ms 781.778106ms 794.469061ms 797.50927ms 797.601068ms 798.223498ms 798.544346ms 798.632195ms 798.863847ms 798.912066ms 798.983203ms 799.027712ms 799.054373ms 799.100039ms 799.124748ms 799.214358ms 799.325034ms 799.406039ms 799.466419ms 799.534343ms 799.550957ms 799.560044ms 799.568704ms 799.617948ms 799.743463ms 799.842519ms 799.871883ms 799.895084ms 799.922549ms 799.945049ms 800.120984ms 800.145364ms 800.27114ms 800.276319ms 800.299466ms 800.326936ms 800.33278ms 800.395541ms 800.46169ms 800.465719ms 800.470821ms 800.532369ms 800.864783ms 800.971961ms 801.129605ms 801.169879ms 801.305591ms 801.324412ms 801.531374ms 801.609381ms 802.489585ms 802.597177ms 825.21588ms 827.952014ms 832.412597ms] May 24 19:03:29.096: INFO: 50 %ile: 749.976646ms May 24 19:03:29.096: INFO: 90 %ile: 800.326936ms May 24 19:03:29.096: INFO: 99 %ile: 827.952014ms May 24 19:03:29.096: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:29.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5362" for this suite. 
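------------------------------
The 50/90/99 %ile figures above are derived from the 200 per-service "Got endpoints" latencies collected during the run. A small self-contained sketch of one way to reproduce such figures, using a nearest-rank percentile (the framework's exact rounding may differ) and seeded with the first few samples from this log, expressed in nanoseconds:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the nearest-rank p-th percentile of a sorted slice.
func percentile(sorted []time.Duration, p float64) time.Duration {
	idx := int(float64(len(sorted))*p/100+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// First samples from the log: 13.331829ms, 11.890546ms, 13.957805ms.
	samples := []time.Duration{13331829, 11890546, 13957805}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%.0f %%ile: %v\n", p, percentile(samples, p))
	}
}
------------------------------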
• [SLOW TEST:11.035 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":14,"skipped":192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:29.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 19:03:30.211: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:30.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4040" for this suite. 
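------------------------------
The Container Runtime test above checks that when a container writes to a non-default termination-message path as a non-root user, the kubelet surfaces the content ("DONE" in this run) in ContainerStateTerminated.Message. A hypothetical reconstruction of the pod shape involved; the UID, image, and path here are illustrative, not taken from the test source:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationMessagePod builds a pod whose non-root container writes its
// termination message to a custom path instead of /dev/termination-log.
func terminationMessagePod() *v1.Pod {
	nonRoot := int64(1000) // any non-root UID; the test's value may differ
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "main",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path; the kubelet reads the file on exit and
				// copies it into ContainerStateTerminated.Message.
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: v1.TerminationMessageReadFile,
				SecurityContext:          &v1.SecurityContext{RunAsUser: &nonRoot},
			}},
		},
	}
}

func main() { fmt.Println(terminationMessagePod().Name) }
------------------------------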
• ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":217,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:23.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3328 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3328 I0524 19:03:23.367740 21 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3328, replica count: 2 I0524 19:03:26.418259 21 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:03:26.418: INFO: Creating new exec pod May 24 19:03:29.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-3328 exec execpod9hxck -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 24 19:03:29.693: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 24 19:03:29.693: INFO: stdout: "" May 24 19:03:29.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-3328 exec execpod9hxck -- /bin/sh -x -c nc -zv -t -w 2 10.96.253.132 80' May 24 19:03:29.950: INFO: stderr: "+ nc -zv -t -w 2 10.96.253.132 80\nConnection to 10.96.253.132 80 port [tcp/http] succeeded!\n" May 24 19:03:29.950: INFO: stdout: "" May 24 19:03:29.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-3328 exec execpod9hxck -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 31237' May 24 19:03:30.188: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 31237\nConnection to 172.18.0.7 31237 port [tcp/31237] succeeded!\n" May 24 19:03:30.188: INFO: stdout: "" May 24 19:03:30.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-3328 exec execpod9hxck -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31237' May 24 19:03:30.398: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31237\nConnection to 172.18.0.5 31237 port [tcp/31237] succeeded!\n" May 24 19:03:30.399: INFO: stdout: "" May 24 19:03:30.399: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:30.411: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "services-3328" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:7.108 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:10.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9769 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet May 24 19:02:10.197: INFO: Found 0 stateful pods, waiting for 3 May 24 19:02:20.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 19:02:20.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 19:02:20.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 19:02:20.229: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 24 19:02:30.262: INFO: Updating stateful set ss2 May 24 19:02:30.270: INFO: Waiting for Pod statefulset-9769/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 24 19:02:40.296: INFO: Found 1 stateful pods, waiting for 3 May 24 19:02:50.301: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 19:02:50.301: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 19:02:50.301: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 24 19:02:50.326: INFO: Updating stateful set ss2 May 24 19:02:50.335: INFO: Waiting for Pod statefulset-9769/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 19:03:00.363: INFO: Updating stateful set ss2 May 24 19:03:00.371: INFO: Waiting for StatefulSet statefulset-9769/ss2 to complete update May 24 19:03:00.371: INFO: Waiting for Pod statefulset-9769/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 19:03:10.378: INFO: Waiting for 
StatefulSet statefulset-9769/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 19:03:20.381: INFO: Deleting all statefulset in ns statefulset-9769 May 24 19:03:20.384: INFO: Scaling statefulset ss2 to 0 May 24 19:03:30.397: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:03:30.399: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:30.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9769" for this suite. • [SLOW TEST:80.261 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":26,"skipped":533,"failed":0} S ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":13,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:57.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-3a1b7e92-1613-453a-ae73-4379ce6998f6 STEP: Creating configMap with name cm-test-opt-upd-8c039078-3fe0-457c-b25a-1f866a46d6a6 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3a1b7e92-1613-453a-ae73-4379ce6998f6 STEP: Updating configmap cm-test-opt-upd-8c039078-3fe0-457c-b25a-1f866a46d6a6 STEP: Creating configMap with name cm-test-opt-create-f8d52973-3317-4d26-b6da-97f684ba8b60 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:31.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3208" for this suite. 
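------------------------------
The canary and phased stages of the StatefulSet run above are both driven by one field, spec.updateStrategy.rollingUpdate.partition: pods with an ordinal >= partition adopt the new revision, lower ordinals stay on the old one. The same flow by hand, assuming a 3-replica StatefulSet named ss2 whose container is called webserver (the container name is a guess; it does not appear in the log):

# partition above the replica count: template changes, but no pod updates yet
kubectl -n statefulset-9769 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
kubectl -n statefulset-9769 set image statefulset/ss2 webserver=httpd:2.4.39-alpine

# canary: only ordinal 2 rolls to the new revision
kubectl -n statefulset-9769 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# phased: partition 0 rolls the remaining ordinals, highest first
kubectl -n statefulset-9769 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl -n statefulset-9769 rollout status statefulset/ss2
------------------------------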
• [SLOW TEST:94.529 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:30.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-ab5bf86f-0748-424d-af68-56924d0e220e STEP: Creating a pod to test consume configMaps May 24 19:03:30.275: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615" in namespace "projected-9235" to be "Succeeded or Failed" May 24 19:03:30.278: INFO: Pod "pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680734ms May 24 19:03:32.282: INFO: Pod "pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00617259s STEP: Saw pod success May 24 19:03:32.282: INFO: Pod "pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615" satisfied condition "Succeeded or Failed" May 24 19:03:32.284: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615 container agnhost-container: STEP: delete the pod May 24 19:03:32.302: INFO: Waiting for pod pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615 to disappear May 24 19:03:32.305: INFO: Pod pod-projected-configmaps-aa533dc4-7c44-4634-9133-fa1e556c5615 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:32.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9235" for this suite. 
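------------------------------
In the ConfigMap optional-updates run above, the volume sources are marked optional, which is what lets the pod keep running while the cm-test-opt-del ConfigMap is deleted and the cm-test-opt-create one does not exist yet; the kubelet then adds, removes, or rewrites the projected files on its periodic sync. A sketch of the shape involved (all names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/cfg/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: maybe-missing
      optional: true        # pod starts even if the ConfigMap is absent
EOF

# create it afterwards; the file appears on a later kubelet sync (typically under a minute)
kubectl create configmap maybe-missing --from-literal=key=value
------------------------------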
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":219,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:31.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-6986d463-6375-44c7-9ff1-d797421140da STEP: Creating a pod to test consume configMaps May 24 19:03:31.728: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a" in namespace "projected-572" to be "Succeeded or Failed" May 24 19:03:31.731: INFO: Pod "pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.154022ms May 24 19:03:33.735: INFO: Pod "pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007055249s STEP: Saw pod success May 24 19:03:33.735: INFO: Pod "pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a" satisfied condition "Succeeded or Failed" May 24 19:03:33.739: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a container agnhost-container: STEP: delete the pod May 24 19:03:33.753: INFO: Waiting for pod pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a to disappear May 24 19:03:33.756: INFO: Pod pod-projected-configmaps-ad263010-61a5-4526-b9e9-43bfe2fe796a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:33.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-572" for this suite. 
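------------------------------
"Mappings" in the projected-ConfigMap run above refers to the items list, which renames a ConfigMap key to an arbitrary file path inside the mount; with runAsUser set, the test also verifies the projected files stay readable for a non-root UID. A minimal sketch (all names hypothetical):

kubectl create configmap demo-map --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-map-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected/renamed-file"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-map
          items:
          - key: data-2
            path: renamed-file      # key renamed on disk
EOF
------------------------------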
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":265,"failed":0} SSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:30.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod May 24 19:03:30.477: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:33.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-840" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":14,"skipped":323,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:27.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod May 24 19:03:30.316: INFO: Successfully updated pod "annotationupdatec71a1d66-fca5-4ab1-be6e-69d194b17bc7" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:34.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6450" for this suite. 
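------------------------------
The annotation-update run above leans on a property specific to downwardAPI volumes: the kubelet rewrites the projected files when pod metadata changes, whereas downward-API environment variables are fixed at container start. A hand-run version (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# change the annotation; the file is rewritten on a later kubelet sync
kubectl annotate pod annotation-demo build="2" --overwrite
kubectl logs annotation-demo --tail=5
------------------------------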
• [SLOW TEST:6.697 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:30.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-476b5f97-7fc9-49c4-9dc0-0985bdbe46d7 STEP: Creating secret with name s-test-opt-upd-bd1a7288-6558-452c-a811-3b3c0578599a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-476b5f97-7fc9-49c4-9dc0-0985bdbe46d7 STEP: Updating secret s-test-opt-upd-bd1a7288-6558-452c-a811-3b3c0578599a STEP: Creating secret with name s-test-opt-create-d9795f71-b580-4db1-9a74-95d4682375dc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:34.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":584,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:32.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 24 19:03:32.372: INFO: Pod name pod-release: Found 0 pods out of 1 May 24 19:03:37.375: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:38.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8182" for this suite. 
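------------------------------
"Released" in the ReplicationController run above is label mechanics: the controller only owns pods matched by its selector, so overwriting the matched label orphans the pod (its controller ownerReference is dropped) and the RC immediately creates a replacement to restore the replica count. Sketch, assuming the selector is name=pod-release (inferred from the pod-name prefix in the log, so partly a guess):

# break the selector match on one pod; replace the placeholder with the real pod name
kubectl label pod pod-release-xxxxx name=released --overwrite

# the RC spins up a replacement, and the released pod keeps running unowned
kubectl get pods -l name=pod-release
kubectl get pod pod-release-xxxxx -o jsonpath='{.metadata.ownerReferences}'
------------------------------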
• [SLOW TEST:6.066 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":17,"skipped":229,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:34.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-f49f628e-224a-41be-8279-aea868384949 STEP: Creating a pod to test consume configMaps May 24 19:03:34.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925" in namespace "configmap-7202" to be "Succeeded or Failed" May 24 19:03:34.539: INFO: Pod "pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.506551ms May 24 19:03:36.542: INFO: Pod "pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005841862s May 24 19:03:38.545: INFO: Pod "pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008775511s STEP: Saw pod success May 24 19:03:38.545: INFO: Pod "pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925" satisfied condition "Succeeded or Failed" May 24 19:03:38.548: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925 container agnhost-container: STEP: delete the pod May 24 19:03:38.562: INFO: Waiting for pod pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925 to disappear May 24 19:03:38.565: INFO: Pod pod-configmaps-d3931595-42b1-41ed-80bf-cfaa48302925 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:38.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7202" for this suite. 
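------------------------------
defaultMode in the run above sets the permission bits on every file projected from the ConfigMap volume; note it is octal in YAML (0444) but must be written as the decimal equivalent (292) in JSON. A sketch with read-only, world-readable files (names hypothetical):

kubectl create configmap demo-mode --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/cfg/ && cat /etc/cfg/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-mode
      defaultMode: 0444     # octal in YAML; 292 in JSON
EOF
------------------------------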
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":446,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:38.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-3c19a400-693e-4325-aaa9-d89c684c0d7c STEP: Creating a pod to test consume configMaps May 24 19:03:38.619: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8" in namespace "configmap-425" to be "Succeeded or Failed" May 24 19:03:38.622: INFO: Pod "pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892128ms May 24 19:03:40.625: INFO: Pod "pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006101874s STEP: Saw pod success May 24 19:03:40.625: INFO: Pod "pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8" satisfied condition "Succeeded or Failed" May 24 19:03:40.628: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8 container configmap-volume-test: STEP: delete the pod May 24 19:03:40.642: INFO: Waiting for pod pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8 to disappear May 24 19:03:40.644: INFO: Pod pod-configmaps-fb3f5a69-69e1-400f-8824-5c9ba334e2c8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:40.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-425" for this suite. 
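------------------------------
"Multiple volumes in the same pod" above is nothing more exotic than two volume entries pointing at one ConfigMap, mounted at different paths; since each mount is its own projection, the two could also carry different items or modes. Sketch (names hypothetical):

kubectl create configmap demo-multi --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-mount-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/cfg-a/key /etc/cfg-b/key"]
    volumeMounts:
    - name: cfg-a
      mountPath: /etc/cfg-a
    - name: cfg-b
      mountPath: /etc/cfg-b
  volumes:
  - name: cfg-a
    configMap:
      name: demo-multi
  - name: cfg-b
    configMap:
      name: demo-multi
EOF
------------------------------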
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":450,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:34.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:03:34.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17" in namespace "projected-9620" to be "Succeeded or Failed" May 24 19:03:34.783: INFO: Pod "downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.891202ms May 24 19:03:36.788: INFO: Pod "downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007785065s May 24 19:03:38.791: INFO: Pod "downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010445282s May 24 19:03:40.795: INFO: Pod "downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014341199s STEP: Saw pod success May 24 19:03:40.795: INFO: Pod "downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17" satisfied condition "Succeeded or Failed" May 24 19:03:40.798: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17 container client-container: STEP: delete the pod May 24 19:03:40.813: INFO: Waiting for pod downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17 to disappear May 24 19:03:40.815: INFO: Pod downwardapi-volume-cd60a0e2-f4cd-436c-a304-05cfe75f9c17 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:40.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9620" for this suite. 
• [SLOW TEST:6.079 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":592,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:20.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-2264 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 19:03:20.237: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 24 19:03:20.261: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 19:03:22.265: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:24.265: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:26.271: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:28.266: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:30.265: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:32.266: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:03:34.265: INFO: The status of Pod netserver-0 is Running (Ready = true) May 24 19:03:34.275: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 24 19:03:40.302: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 24 19:03:40.302: INFO: Going to poll 10.244.1.125 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 24 19:03:40.305: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.125 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2264 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:03:40.305: INFO: >>> kubeConfig: /root/.kube/config May 24 19:03:41.420: INFO: Found all 1 expected endpoints: [netserver-0] May 24 19:03:41.420: INFO: Going to poll 10.244.2.34 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 24 19:03:41.424: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.34 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2264 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:03:41.424: INFO: >>> kubeConfig: /root/.kube/config May 24 19:03:42.517: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] 
Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:42.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2264" for this suite. • [SLOW TEST:22.321 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:42.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 24 19:03:42.622: INFO: starting watch STEP: patching STEP: updating May 24 19:03:42.632: INFO: waiting for watch events with expected annotations May 24 19:03:42.632: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:42.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-6262" for this suite. 
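------------------------------
The IngressClass steps above (create, get, list, watch, patch, update, delete, deleteCollection) can be replayed with plain kubectl against networking.k8s.io/v1; the controller string below is a placeholder, and the object only acquires meaning once an ingress controller that matches it is installed:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: demo-class
spec:
  controller: example.com/ingress-controller
EOF

kubectl get ingressclasses
kubectl patch ingressclass demo-class --type=merge \
  -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl delete ingressclass demo-class
------------------------------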
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":22,"skipped":345,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:40.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:03:40.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0" in namespace "projected-6523" to be "Succeeded or Failed" May 24 19:03:40.880: INFO: Pod "downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.924365ms May 24 19:03:42.884: INFO: Pod "downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006311844s STEP: Saw pod success May 24 19:03:42.884: INFO: Pod "downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0" satisfied condition "Succeeded or Failed" May 24 19:03:42.887: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0 container client-container: STEP: delete the pod May 24 19:03:42.900: INFO: Waiting for pod downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0 to disappear May 24 19:03:42.903: INFO: Pod downwardapi-volume-c8ddda1f-d4a8-4d1c-8cfb-1af19fd1c6c0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:42.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6523" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":598,"failed":0} [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:42.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:42.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7069" for this suite. 
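------------------------------
The Lease API exercised above lives in coordination.k8s.io/v1 and is the primitive behind node heartbeats and leader election; the object itself is only a handful of timestamped fields, so it is cheap to poke at by hand (names below are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease
spec:
  holderIdentity: demo-holder
  leaseDurationSeconds: 30
EOF

kubectl get lease demo-lease -o yaml
------------------------------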
• ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:33.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod May 24 19:03:34.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 create -f -' May 24 19:03:34.446: INFO: stderr: "" May 24 19:03:34.446: INFO: stdout: "pod/pause created\n" May 24 19:03:34.446: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 24 19:03:34.446: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3748" to be "running and ready" May 24 19:03:34.449: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.72049ms May 24 19:03:36.452: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006378678s May 24 19:03:38.456: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009997888s May 24 19:03:40.460: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013792429s May 24 19:03:42.463: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.017086601s May 24 19:03:42.463: INFO: Pod "pause" satisfied condition "running and ready" May 24 19:03:42.464: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: adding the label testing-label with value testing-label-value to a pod May 24 19:03:42.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 label pods pause testing-label=testing-label-value' May 24 19:03:42.599: INFO: stderr: "" May 24 19:03:42.599: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 24 19:03:42.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 get pod pause -L testing-label' May 24 19:03:42.718: INFO: stderr: "" May 24 19:03:42.718: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod May 24 19:03:42.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 label pods pause testing-label-' May 24 19:03:42.840: INFO: stderr: "" May 24 19:03:42.840: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 24 19:03:42.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 get pod pause -L testing-label' May 24 19:03:42.958: INFO: stderr: "" May 24 19:03:42.958: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources May 24 19:03:42.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 delete --grace-period=0 --force -f -' May 24 19:03:43.079: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:43.079: INFO: stdout: "pod \"pause\" force deleted\n" May 24 19:03:43.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 get rc,svc -l name=pause --no-headers' May 24 19:03:43.204: INFO: stderr: "No resources found in kubectl-3748 namespace.\n" May 24 19:03:43.204: INFO: stdout: "" May 24 19:03:43.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3748 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 19:03:43.322: INFO: stderr: "" May 24 19:03:43.322: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:43.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3748" for this suite. 
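------------------------------
Two kubectl idioms in the label round-trip above are easy to miss: -L <key> adds that label's value as an extra output column, and a trailing minus on the key removes the label. Distilled from the commands above:

kubectl label pod pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pod pause testing-label-        # trailing '-' deletes the label
------------------------------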
• [SLOW TEST:9.361 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":15,"skipped":336,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:43.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:43.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4864" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":16,"skipped":347,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:42.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-4992536a-ac41-4323-82a2-ab1ebb7be1d7 STEP: Creating a pod to test consume secrets May 24 19:03:42.736: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d" in namespace "projected-7221" to be "Succeeded or Failed" May 24 19:03:42.739: INFO: Pod "pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.663237ms May 24 19:03:44.743: INFO: Pod "pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006476506s May 24 19:03:46.837: INFO: Pod "pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.100515061s STEP: Saw pod success May 24 19:03:46.837: INFO: Pod "pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d" satisfied condition "Succeeded or Failed" May 24 19:03:46.840: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d container projected-secret-volume-test: STEP: delete the pod May 24 19:03:46.932: INFO: Waiting for pod pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d to disappear May 24 19:03:47.025: INFO: Pod pod-projected-secrets-60853c53-a77d-4d89-b594-82636339d48d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:47.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7221" for this suite. • ------------------------------ [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:40.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:47.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3373" for this suite. • [SLOW TEST:6.343 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":359,"failed":0} S ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":21,"skipped":471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:02:39.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0524 19:02:45.383281 18 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
May 24 19:03:47.523: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:47.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7260" for this suite. • [SLOW TEST:68.313 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":18,"skipped":230,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:38.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:03:38.847: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 24 19:03:40.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479818, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479818, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479818, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479818, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:03:43.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 24 19:03:44.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 24 19:03:45.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 24 19:03:46.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 24 19:03:47.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:48.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8268" for this suite. STEP: Destroying namespace "webhook-8268-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.856 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":18,"skipped":235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:43.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 24 19:03:43.830: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:03:43.844: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:03:46.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 May 24 19:03:47.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:03:47.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:49.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2962" for this suite. STEP: Destroying namespace "webhook-2962-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.886 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":17,"skipped":354,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:47.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-b311bde6-d389-4ea4-a850-be1d337ccc34 STEP: Creating a pod to test consume configMaps May 24 19:03:47.687: INFO: Waiting up to 5m0s for pod "pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d" in namespace "configmap-5759" to be "Succeeded or Failed" May 24 19:03:47.690: INFO: Pod "pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.639146ms May 24 19:03:49.694: INFO: Pod "pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00671287s STEP: Saw pod success May 24 19:03:49.694: INFO: Pod "pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d" satisfied condition "Succeeded or Failed" May 24 19:03:49.698: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d container agnhost-container: STEP: delete the pod May 24 19:03:49.714: INFO: Waiting for pod pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d to disappear May 24 19:03:49.717: INFO: Pod pod-configmaps-7871a4d2-8366-4eda-a787-62ef5958e93d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:49.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5759" for this suite. 
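For reference, the pod this ConfigMap test creates consumes the ConfigMap as a volume with a key-to-path mapping and runs as a non-root user. A minimal sketch of such a spec, with illustrative names and a plain busybox command standing in for the test's agnhost container, might look like:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo        # hypothetical name, not the generated one above
spec:
  securityContext:
    runAsUser: 1000                  # the "as non-root" part of the test title
  containers:
  - name: agnhost-container
    image: busybox
    command: [ "sh", "-c", "cat /etc/configmap-volume/path/to/data-2" ]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2                  # "with mappings": remap a key to a nested path
        path: path/to/data-2
  restartPolicy: Never

The pod is expected to reach Succeeded once its command exits 0, which is the "Succeeded or Failed" condition polled above.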
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":239,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:47.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars May 24 19:03:47.114: INFO: Waiting up to 5m0s for pod "downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202" in namespace "downward-api-3088" to be "Succeeded or Failed" May 24 19:03:47.117: INFO: Pod "downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657236ms May 24 19:03:49.121: INFO: Pod "downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00689434s May 24 19:03:51.125: INFO: Pod "downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010684315s STEP: Saw pod success May 24 19:03:51.125: INFO: Pod "downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202" satisfied condition "Succeeded or Failed" May 24 19:03:51.128: INFO: Trying to get logs from node leguer-worker pod downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202 container dapi-container: STEP: delete the pod May 24 19:03:51.145: INFO: Waiting for pod downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202 to disappear May 24 19:03:51.148: INFO: Pod downward-api-2abf8fe6-2242-4892-9d2e-39416eb66202 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:51.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3088" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":381,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:49.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 19:03:49.368: INFO: Waiting up to 5m0s for pod "pod-edb63485-5475-42cd-aa74-9eaa336afa5e" in namespace "emptydir-9735" to be "Succeeded or Failed" May 24 19:03:49.371: INFO: Pod "pod-edb63485-5475-42cd-aa74-9eaa336afa5e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.741991ms May 24 19:03:51.375: INFO: Pod "pod-edb63485-5475-42cd-aa74-9eaa336afa5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006507641s STEP: Saw pod success May 24 19:03:51.375: INFO: Pod "pod-edb63485-5475-42cd-aa74-9eaa336afa5e" satisfied condition "Succeeded or Failed" May 24 19:03:51.378: INFO: Trying to get logs from node leguer-worker2 pod pod-edb63485-5475-42cd-aa74-9eaa336afa5e container test-container: STEP: delete the pod May 24 19:03:51.393: INFO: Waiting for pod pod-edb63485-5475-42cd-aa74-9eaa336afa5e to disappear May 24 19:03:51.396: INFO: Pod pod-edb63485-5475-42cd-aa74-9eaa336afa5e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:51.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9735" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:47.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:58.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8171" for this suite. • [SLOW TEST:11.077 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":22,"skipped":500,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:51.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating all guestbook components May 24 19:03:51.484: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend May 24 19:03:51.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 create -f -' May 24 19:03:51.762: INFO: stderr: "" May 24 19:03:51.762: INFO: stdout: "service/agnhost-replica created\n" May 24 19:03:51.762: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend May 24 19:03:51.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 create -f -' May 24 19:03:52.040: INFO: stderr: "" May 24 19:03:52.040: INFO: stdout: "service/agnhost-primary created\n" May 24 19:03:52.041: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 24 19:03:52.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 create -f -' May 24 19:03:52.312: INFO: stderr: "" May 24 19:03:52.312: INFO: stdout: "service/frontend created\n" May 24 19:03:52.312: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 24 19:03:52.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 create -f -' May 24 19:03:52.569: INFO: stderr: "" May 24 19:03:52.569: INFO: stdout: "deployment.apps/frontend created\n" May 24 19:03:52.569: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 24 19:03:52.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 create -f -' May 24 19:03:52.852: INFO: stderr: "" May 24 19:03:52.852: INFO: stdout: "deployment.apps/agnhost-primary created\n" May 24 19:03:52.853: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 24 19:03:52.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 create -f -' May 24 19:03:53.148: INFO: stderr: "" May 24 19:03:53.148: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app May 24 19:03:53.148: INFO: Waiting for all frontend pods to be Running. May 24 19:03:58.198: INFO: Waiting for frontend to serve content. May 24 19:03:58.210: INFO: Trying to add a new entry to the guestbook. May 24 19:03:58.222: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 24 19:03:58.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 delete --grace-period=0 --force -f -' May 24 19:03:58.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:58.367: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 24 19:03:58.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 delete --grace-period=0 --force -f -' May 24 19:03:58.487: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:58.487: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 24 19:03:58.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 delete --grace-period=0 --force -f -' May 24 19:03:58.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:58.614: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 19:03:58.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 delete --grace-period=0 --force -f -' May 24 19:03:58.736: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:58.736: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 19:03:58.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 delete --grace-period=0 --force -f -' May 24 19:03:58.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:58.931: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 24 19:03:58.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-7210 delete --grace-period=0 --force -f -' May 24 19:03:59.051: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 19:03:59.051: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:59.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7210" for this suite. 
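Note that the frontend Service logged above is created with the default ClusterIP type; its inline comment points out that, on clusters with a load-balancer integration, the same manifest could expose the app externally. A sketch of that uncommented variant:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer    # the line left commented out in the manifest above
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

The cleanup pass then uses kubectl delete --grace-period=0 --force, which, as the repeated warning says, returns before the pods have actually terminated.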
• [SLOW TEST:7.608 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":19,"skipped":392,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":30,"skipped":598,"failed":0} [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:43.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 24 19:03:49.078: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 19:03:49.081: INFO: Pod pod-with-poststart-exec-hook still exists May 24 19:03:51.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 19:03:51.086: INFO: Pod pod-with-poststart-exec-hook still exists May 24 19:03:53.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 19:03:53.085: INFO: Pod pod-with-poststart-exec-hook still exists May 24 19:03:55.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 19:03:55.085: INFO: Pod pod-with-poststart-exec-hook still exists May 24 19:03:57.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 19:03:57.085: INFO: Pod pod-with-poststart-exec-hook still exists May 24 19:03:59.081: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 24 19:03:59.085: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:59.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9266" for this suite. 
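The lifecycle-hook case above pairs a postStart exec hook with a separate handler pod that records the hook firing. A minimal sketch of the hooked pod itself (the handler plumbing is omitted; the command is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: [ "sh", "-c", "sleep 600" ]
    lifecycle:
      postStart:
        exec:
          # runs inside the container right after it starts; the container
          # is not marked Running until the hook completes
          command: [ "sh", "-c", "echo poststart >> /tmp/hook.log" ]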
• [SLOW TEST:16.094 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ S ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:40.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 in namespace container-probe-2047 May 24 19:01:42.931: INFO: Started pod liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 in namespace container-probe-2047 STEP: checking the pod's current state and verifying that restartCount is present May 24 19:01:42.934: INFO: Initial restart count of pod liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 is 0 May 24 19:02:01.026: INFO: Restart count of pod container-probe-2047/liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 is now 1 (18.09215469s elapsed) May 24 19:02:19.098: INFO: Restart count of pod container-probe-2047/liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 is now 2 (36.163289779s elapsed) May 24 19:02:39.197: INFO: Restart count of pod container-probe-2047/liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 is now 3 (56.263082881s elapsed) May 24 19:02:59.297: INFO: Restart count of pod container-probe-2047/liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 is now 4 (1m16.363141521s elapsed) May 24 19:03:59.651: INFO: Restart count of pod container-probe-2047/liveness-49647f16-91bc-425e-a5ee-9c16fed8ef29 is now 5 (2m16.716679096s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:03:59.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2047" for this suite. 
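The growing gaps between the restart counts above (from roughly 18s apart to 60s apart) come from the kubelet's exponential restart back-off. A pod with a liveness probe designed to keep failing, in the spirit of this test, could be sketched as:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo                # illustrative; the test generates its own name
spec:
  containers:
  - name: liveness
    image: busybox
    # healthy for 10s, then the probe target disappears and the kubelet
    # restarts the container on every failed check
    command: [ "sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600" ]
    livenessProbe:
      exec:
        command: [ "cat", "/tmp/healthy" ]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1

Each restart increments restartCount, which the test asserts only ever increases.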
• [SLOW TEST:138.785 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":363,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:59.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 19:03:59.244: INFO: Waiting up to 5m0s for pod "pod-261e857b-535a-4de8-b837-37acaeae5242" in namespace "emptydir-6789" to be "Succeeded or Failed" May 24 19:03:59.248: INFO: Pod "pod-261e857b-535a-4de8-b837-37acaeae5242": Phase="Pending", Reason="", readiness=false. Elapsed: 4.58304ms May 24 19:04:01.253: INFO: Pod "pod-261e857b-535a-4de8-b837-37acaeae5242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009640517s May 24 19:04:03.257: INFO: Pod "pod-261e857b-535a-4de8-b837-37acaeae5242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013128236s STEP: Saw pod success May 24 19:04:03.257: INFO: Pod "pod-261e857b-535a-4de8-b837-37acaeae5242" satisfied condition "Succeeded or Failed" May 24 19:04:03.260: INFO: Trying to get logs from node leguer-worker pod pod-261e857b-535a-4de8-b837-37acaeae5242 container test-container: STEP: delete the pod May 24 19:04:03.274: INFO: Waiting for pod pod-261e857b-535a-4de8-b837-37acaeae5242 to disappear May 24 19:04:03.277: INFO: Pod pod-261e857b-535a-4de8-b837-37acaeae5242 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:03.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6789" for this suite. 
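The emptyDir variants in this run differ only in file mode, writing user, and storage medium. A sketch of the (root,0644,default) case just finished, with hypothetical names; swapping the volume for "emptyDir: { medium: Memory }" gives the tmpfs variants:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "sh", "-c", "ls -l /test-volume" ]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium: backed by node storage
  restartPolicy: Never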
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":617,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:03.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 19:04:03.326: INFO: Waiting up to 5m0s for pod "pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df" in namespace "emptydir-52" to be "Succeeded or Failed" May 24 19:04:03.329: INFO: Pod "pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660383ms May 24 19:04:05.333: INFO: Pod "pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006662892s STEP: Saw pod success May 24 19:04:05.333: INFO: Pod "pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df" satisfied condition "Succeeded or Failed" May 24 19:04:05.336: INFO: Trying to get logs from node leguer-worker2 pod pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df container test-container: STEP: delete the pod May 24 19:04:05.352: INFO: Waiting for pod pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df to disappear May 24 19:04:05.356: INFO: Pod pod-adfd0163-3bf2-4297-8ab7-324ad2d8b4df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:05.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-52" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:49.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:06.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3588" for this suite. • [SLOW TEST:17.074 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":20,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:06.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:10.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3131" for this suite. 
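As the result line below notes, this Kubelet case schedules a busybox pod with hostAliases and checks the kubelet-managed /etc/hosts. A minimal sketch with an illustrative alias:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"                    # hypothetical hostname, for illustration only
  containers:
  - name: busybox
    image: busybox
    command: [ "cat", "/etc/hosts" ]
  restartPolicy: Never

The kubelet appends the hostAliases entries to the container's /etc/hosts file, which is what the test verifies.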
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:51.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9017 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9017 STEP: creating replication controller externalsvc in namespace services-9017 I0524 19:03:51.219987 32 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9017, replica count: 2 I0524 19:03:54.270506 32 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 24 19:03:54.288: INFO: Creating new exec pod May 24 19:03:56.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-9017 exec execpodr2x8n -- /bin/sh -x -c nslookup nodeport-service.services-9017.svc.cluster.local' May 24 19:03:56.602: INFO: stderr: "+ nslookup nodeport-service.services-9017.svc.cluster.local\n" May 24 19:03:56.602: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9017.svc.cluster.local\tcanonical name = externalsvc.services-9017.svc.cluster.local.\nName:\texternalsvc.services-9017.svc.cluster.local\nAddress: 10.96.87.30\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9017, will wait for the garbage collector to delete the pods May 24 19:03:56.663: INFO: Deleting ReplicationController externalsvc took: 6.997159ms May 24 19:03:56.763: INFO: Terminating ReplicationController externalsvc pods took: 100.403517ms May 24 19:04:10.977: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:10.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9017" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:19.836 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:11.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:04:11.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 create -f -' May 24 19:04:11.342: INFO: stderr: "" May 24 19:04:11.342: INFO: stdout: "replicationcontroller/agnhost-primary created\n" May 24 19:04:11.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 create -f -' May 24 19:04:11.605: INFO: stderr: "" May 24 19:04:11.605: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 24 19:04:12.609: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:04:12.609: INFO: Found 1 / 1 May 24 19:04:12.609: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 19:04:12.612: INFO: Selector matched 1 pods for map[app:agnhost] May 24 19:04:12.613: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 24 19:04:12.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 describe pod agnhost-primary-6rt2l' May 24 19:04:12.752: INFO: stderr: "" May 24 19:04:12.752: INFO: stdout: "Name: agnhost-primary-6rt2l\nNamespace: kubectl-6160\nPriority: 0\nNode: leguer-worker2/172.18.0.5\nStart Time: Mon, 24 May 2021 19:04:11 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.62\"\n ],\n \"mac\": \"ae:be:fe:04:2c:9e\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.2.62\"\n ],\n \"mac\": \"ae:be:fe:04:2c:9e\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.244.2.62\nIPs:\n IP: 10.244.2.62\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://b8a3c5b8fc175f2f20d1a7a63179940c880bc5d41e9314ca8702c437824d9869\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 24 May 2021 19:04:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dh4pt (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dh4pt:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dh4pt\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-6160/agnhost-primary-6rt2l to leguer-worker2\n Normal AddedInterface 1s multus Add eth0 [10.244.2.62/24]\n Normal Pulled 0s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 0s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" May 24 19:04:12.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 describe rc agnhost-primary' May 24 19:04:12.897: INFO: stderr: "" May 24 19:04:12.898: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6160\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-6rt2l\n" May 24 19:04:12.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 describe service agnhost-primary' May 24 19:04:13.030: INFO: stderr: "" May 24 19:04:13.031: INFO: stdout: "Name: agnhost-primary\nNamespace: 
kubectl-6160\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.96.228.121\nIPs: 10.96.228.121\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.62:6379\nSession Affinity: None\nEvents: \n" May 24 19:04:13.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 describe node leguer-control-plane' May 24 19:04:13.200: INFO: stderr: "" May 24 19:04:13.200: INFO: stdout: "Name: leguer-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n ingress-ready=true\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 22 May 2021 08:23:02 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Mon, 24 May 2021 19:04:05 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 24 May 2021 19:02:55 +0000 Sat, 22 May 2021 08:22:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 24 May 2021 19:02:55 +0000 Sat, 22 May 2021 08:22:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 24 May 2021 19:02:55 +0000 Sat, 22 May 2021 08:22:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 24 May 2021 19:02:55 +0000 Sat, 22 May 2021 08:23:36 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.6\n Hostname: leguer-control-plane\nCapacity:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nAllocatable:\n cpu: 88\n ephemeral-storage: 459602040Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65849824Ki\n pods: 110\nSystem Info:\n Machine ID: cd6232015d5d4123a4f981fce21e3374\n System UUID: eba32c45-894e-4080-80ed-6ad2fd75cb06\n Boot ID: 8e840902-9ac1-4acc-b00a-3731226c7bea\n Kernel Version: 5.4.0-73-generic\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.1\n Kubelet Version: v1.20.7\n Kube-Proxy Version: v1.20.7\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/leguer/leguer-control-plane\nNon-terminated Pods: (14 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system create-loop-devs-dxl2f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n kube-system etcd-leguer-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 2d10h\n kube-system kindnet-8gg6p 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d10h\n kube-system kube-apiserver-leguer-control-plane 250m (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n kube-system kube-controller-manager-leguer-control-plane 200m (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n kube-system kube-multus-ds-bxrtj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d10h\n kube-system kube-proxy-vqm28 0 (0%) 0 (0%) 0 
(0%) 0 (0%) 2d10h\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n kube-system tune-sysctls-s5nrx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n kubernetes-dashboard dashboard-metrics-scraper-79c5968bdc-krkfj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n kubernetes-dashboard kubernetes-dashboard-9f9799597-x8tx5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n local-path-storage local-path-provisioner-547f784dff-pbsvl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n metallb-system speaker-gjr9t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\n projectcontour envoy-nwdcq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (0%) 200m (0%)\n memory 200Mi (0%) 100Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning SystemOOM 48m kubelet System OOM encountered, victim process: iptables, pid: 1787392\n Warning SystemOOM 48m kubelet System OOM encountered, victim process: kindnetd, pid: 1579794\n Warning SystemOOM 27m kubelet System OOM encountered, victim process: iptables, pid: 1820464\n Warning SystemOOM 21m kubelet System OOM encountered, victim process: iptables, pid: 1843161\n Warning SystemOOM 21m kubelet System OOM encountered, victim process: kindnetd, pid: 1598369\n" May 24 19:04:13.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-6160 describe namespace kubectl-6160' May 24 19:04:13.326: INFO: stderr: "" May 24 19:04:13.326: INFO: stdout: "Name: kubectl-6160\nLabels: e2e-framework=kubectl\n e2e-run=1b13c952-20ae-4c4f-a8a4-28c181a6ef70\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:13.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6160" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":22,"skipped":323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:13.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-5918/configmap-test-50284417-dd78-4f98-b15c-8977f2904b5f STEP: Creating a pod to test consume configMaps May 24 19:04:13.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512" in namespace "configmap-5918" to be "Succeeded or Failed" May 24 19:04:13.443: INFO: Pod "pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.105561ms May 24 19:04:15.447: INFO: Pod "pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006715943s May 24 19:04:17.451: INFO: Pod "pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011114274s STEP: Saw pod success May 24 19:04:17.451: INFO: Pod "pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512" satisfied condition "Succeeded or Failed" May 24 19:04:17.455: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512 container env-test: STEP: delete the pod May 24 19:04:17.471: INFO: Waiting for pod pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512 to disappear May 24 19:04:17.475: INFO: Pod pod-configmaps-eedcfcc9-69b1-4d6b-9961-05724cab0512 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:17.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5918" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":352,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:17.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:04:17.549: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:18.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9846" for this suite. 
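CRD defaulting "for requests and from storage" relies on defaults declared in the structural openAPIV3Schema. A hypothetical CRD illustrating the mechanism (group, kind, and field names are invented):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1    # applied to incoming requests and when reading from etcd

Objects created without spec.replicas come back with the default filled in, both on the create response and on later reads, which is the two-sided behavior the test name describes.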
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":24,"skipped":365,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:18.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-f0abed9e-16de-4886-a6ee-02c8285b0ef4 STEP: Creating a pod to test consume secrets May 24 19:04:18.810: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c" in namespace "projected-9766" to be "Succeeded or Failed" May 24 19:04:18.813: INFO: Pod "pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.273602ms May 24 19:04:20.819: INFO: Pod "pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00930282s STEP: Saw pod success May 24 19:04:20.819: INFO: Pod "pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c" satisfied condition "Succeeded or Failed" May 24 19:04:20.824: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c container projected-secret-volume-test: STEP: delete the pod May 24 19:04:20.840: INFO: Waiting for pod pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c to disappear May 24 19:04:20.844: INFO: Pod pod-projected-secrets-b0b1e053-9e10-42f0-ac53-f6fb224dc30c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:20.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9766" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:59.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6931 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6931 STEP: Creating statefulset with conflicting port in namespace statefulset-6931 STEP: Waiting until pod test-pod will start running in namespace statefulset-6931 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6931 May 24 19:04:03.761: INFO: Observed stateful pod in namespace: statefulset-6931, name: ss-0, uid: b7d1ec60-c8b4-441f-b4b5-4f7a978a6110, status phase: Pending. Waiting for statefulset controller to delete. May 24 19:04:05.086: INFO: Observed stateful pod in namespace: statefulset-6931, name: ss-0, uid: b7d1ec60-c8b4-441f-b4b5-4f7a978a6110, status phase: Failed. Waiting for statefulset controller to delete. May 24 19:04:05.094: INFO: Observed stateful pod in namespace: statefulset-6931, name: ss-0, uid: b7d1ec60-c8b4-441f-b4b5-4f7a978a6110, status phase: Failed. Waiting for statefulset controller to delete. May 24 19:04:05.097: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6931 STEP: Removing pod with conflicting port in namespace statefulset-6931 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6931 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 19:04:11.125: INFO: Deleting all statefulset in ns statefulset-6931 May 24 19:04:11.129: INFO: Scaling statefulset ss to 0 May 24 19:04:21.144: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:04:21.147: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:21.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6931" for this suite. 
• [SLOW TEST:21.473 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":22,"skipped":375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:48.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4886, will wait for the garbage collector to delete the pods May 24 19:03:52.428: INFO: Deleting Job.batch foo took: 6.444236ms May 24 19:03:52.528: INFO: Terminating Job.batch foo pods took: 100.30936ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:24.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4886" for this suite. 
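The Job case first checks "active pods == parallelism", then deletes the Job and lets the garbage collector reap its pods. A sketch of a matching long-running Job (image and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                     # the count the "active pods" check compares against
  template:
    spec:
      containers:
      - name: c
        image: busybox
        command: [ "sh", "-c", "sleep 3600" ]
      restartPolicy: Never

Deleting the Job object cascades to its pods, which is why the test waits on the garbage collector before asserting the Job is gone.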
• [SLOW TEST:36.026 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":19,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:21.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container May 24 19:04:23.778: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7096 pod-service-account-718fd511-e9f5-40e5-bc86-b4713dbdef5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 24 19:04:24.039: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7096 pod-service-account-718fd511-e9f5-40e5-bc86-b4713dbdef5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 24 19:04:24.297: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7096 pod-service-account-718fd511-e9f5-40e5-bc86-b4713dbdef5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:24.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7096" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":23,"skipped":402,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:05.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1271 STEP: creating service affinity-nodeport in namespace services-1271 STEP: creating replication controller affinity-nodeport in namespace services-1271 I0524 19:04:05.472860 21 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-1271, replica count: 3 I0524 19:04:08.523427 21 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:04:11.523735 21 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:04:11.538: INFO: Creating new exec pod May 24 19:04:14.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1271 exec execpod-affinityl2z7r -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 24 19:04:14.842: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" May 24 19:04:14.842: INFO: stdout: "" May 24 19:04:14.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1271 exec execpod-affinityl2z7r -- /bin/sh -x -c nc -zv -t -w 2 10.96.244.23 80' May 24 19:04:15.080: INFO: stderr: "+ nc -zv -t -w 2 10.96.244.23 80\nConnection to 10.96.244.23 80 port [tcp/http] succeeded!\n" May 24 19:04:15.080: INFO: stdout: "" May 24 19:04:15.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1271 exec execpod-affinityl2z7r -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 32078' May 24 19:04:15.324: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 32078\nConnection to 172.18.0.7 32078 port [tcp/32078] succeeded!\n" May 24 19:04:15.324: INFO: stdout: "" May 24 19:04:15.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1271 exec execpod-affinityl2z7r -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 32078' May 24 19:04:15.574: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 32078\nConnection to 172.18.0.5 32078 port [tcp/32078] succeeded!\n" May 24 19:04:15.574: INFO: stdout: "" May 24 19:04:15.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-1271 exec execpod-affinityl2z7r -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s 
--connect-timeout 2 http://172.18.0.7:32078/ ; done' May 24 19:04:15.926: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32078/\n" May 24 19:04:15.926: INFO: stdout: "\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz\naffinity-nodeport-pfgtz" May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Received response from host: affinity-nodeport-pfgtz May 24 19:04:15.926: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-1271, will wait for the garbage collector to delete the pods May 24 19:04:15.995: INFO: Deleting ReplicationController affinity-nodeport took: 6.187716ms May 24 19:04:16.095: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.27435ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:28.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1271" for 
this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:22.596 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":647,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:24.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod May 24 19:04:24.576: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:28.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7970" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":24,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:28.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events May 24 19:04:28.436: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:28.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8871" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":25,"skipped":469,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":25,"skipped":383,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:11.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:04:11.540: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 19:04:13.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479851, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479851, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479851, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479851, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:04:16.567: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:28.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7962" for this suite. STEP: Destroying namespace "webhook-7962-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.778 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":26,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:20.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 24 19:04:25.087: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 19:04:25.090: INFO: Pod pod-with-poststart-http-hook still exists May 24 19:04:27.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 19:04:27.096: INFO: Pod pod-with-poststart-http-hook still exists May 24 19:04:29.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 19:04:29.095: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:29.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5652" for this suite. • [SLOW TEST:8.180 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":404,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:28.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:04:28.538: INFO: Waiting up to 5m0s for pod "downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8" in namespace "downward-api-7150" to be "Succeeded or Failed" May 24 19:04:28.540: INFO: Pod "downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836192ms May 24 19:04:30.544: INFO: Pod "downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006657326s STEP: Saw pod success May 24 19:04:30.544: INFO: Pod "downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8" satisfied condition "Succeeded or Failed" May 24 19:04:30.548: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8 container client-container: STEP: delete the pod May 24 19:04:30.686: INFO: Waiting for pod downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8 to disappear May 24 19:04:30.690: INFO: Pod downwardapi-volume-804e1257-111a-4b37-a722-aa75ea47ced8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:30.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7150" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":487,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:28.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 19:04:28.856: INFO: Waiting up to 5m0s for pod "pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e" in namespace "emptydir-6981" to be "Succeeded or Failed" May 24 19:04:28.859: INFO: Pod "pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.794597ms May 24 19:04:30.862: INFO: Pod "pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006468213s STEP: Saw pod success May 24 19:04:30.863: INFO: Pod "pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e" satisfied condition "Succeeded or Failed" May 24 19:04:30.866: INFO: Trying to get logs from node leguer-worker pod pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e container test-container: STEP: delete the pod May 24 19:04:30.881: INFO: Waiting for pod pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e to disappear May 24 19:04:30.883: INFO: Pod pod-00b4674f-263e-4dfa-99d8-8d9acd3e249e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:30.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6981" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":402,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:29.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 19:04:33.203: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 19:04:33.205: INFO: Pod pod-with-prestop-exec-hook still exists May 24 19:04:35.206: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 19:04:35.208: INFO: Pod pod-with-prestop-exec-hook still exists May 24 19:04:37.206: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 19:04:37.210: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:37.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7119" for this suite. 
• [SLOW TEST:8.095 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":418,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:37.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-55d795b4-85dc-4b0c-be97-c594fb1cd6ca STEP: Creating a pod to test consume secrets May 24 19:04:37.295: INFO: Waiting up to 5m0s for pod "pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f" in namespace "secrets-8233" to be "Succeeded or Failed" May 24 19:04:37.298: INFO: Pod "pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890123ms May 24 19:04:39.302: INFO: Pod "pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007003627s STEP: Saw pod success May 24 19:04:39.302: INFO: Pod "pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f" satisfied condition "Succeeded or Failed" May 24 19:04:39.305: INFO: Trying to get logs from node leguer-worker pod pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f container secret-volume-test: STEP: delete the pod May 24 19:04:39.318: INFO: Waiting for pod pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f to disappear May 24 19:04:39.320: INFO: Pod pod-secrets-b98a515e-4c17-4f62-98aa-8423f33edd6f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:39.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8233" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":426,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:30.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:04:30.945: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 19:04:34.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2416 --namespace=crd-publish-openapi-2416 create -f -' May 24 19:04:35.329: INFO: stderr: "" May 24 19:04:35.329: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 24 19:04:35.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2416 --namespace=crd-publish-openapi-2416 delete e2e-test-crd-publish-openapi-9535-crds test-cr' May 24 19:04:35.475: INFO: stderr: "" May 24 19:04:35.475: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 24 19:04:35.475: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2416 --namespace=crd-publish-openapi-2416 apply -f -' May 24 19:04:35.870: INFO: stderr: "" May 24 19:04:35.871: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 24 19:04:35.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2416 --namespace=crd-publish-openapi-2416 delete e2e-test-crd-publish-openapi-9535-crds test-cr' May 24 19:04:35.993: INFO: stderr: "" May 24 19:04:35.993: INFO: stdout: "e2e-test-crd-publish-openapi-9535-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 24 19:04:35.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2416 explain e2e-test-crd-publish-openapi-9535-crds' May 24 19:04:36.259: INFO: stderr: "" May 24 19:04:36.259: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9535-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:40.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "crd-publish-openapi-2416" for this suite. • [SLOW TEST:9.312 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":28,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:40.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:40.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1869" for this suite. 
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:39.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:04:40.038: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 19:04:42.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479880, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479880, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479880, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479880, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:04:45.065: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:04:45.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7948-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:46.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1014" for this suite. STEP: Destroying namespace "webhook-1014-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.033 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":29,"skipped":471,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:40.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 24 19:04:41.156: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:04:41.171: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 24 19:04:43.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479881, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479881, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479881, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479881, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:04:46.240: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 24 19:04:46.440: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:46.461: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1668" for this suite. STEP: Destroying namespace "webhook-1668-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.002 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":30,"skipped":533,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:30.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating an pod May 24 19:04:30.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 24 19:04:30.878: INFO: stderr: "" May 24 19:04:30.878: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Waiting for log generator to start. May 24 19:04:30.878: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 24 19:04:30.878: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8508" to be "running and ready, or succeeded" May 24 19:04:30.881: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.949583ms May 24 19:04:32.885: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006217114s May 24 19:04:34.888: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.009870244s May 24 19:04:34.888: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 24 19:04:34.888: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 24 19:04:34.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 logs logs-generator logs-generator' May 24 19:04:35.140: INFO: stderr: "" May 24 19:04:35.140: INFO: stdout: "I0524 19:04:31.756891 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/2tgz 232\nI0524 19:04:31.957141 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/d842 372\nI0524 19:04:32.157143 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/v68 393\nI0524 19:04:32.357113 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/g9kd 274\nI0524 19:04:32.556926 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/hg8 307\nI0524 19:04:32.757125 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6fc 383\nI0524 19:04:32.957089 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/vnfl 363\nI0524 19:04:33.157096 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/4prl 578\nI0524 19:04:33.357057 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/7n8 270\nI0524 19:04:33.557092 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/8zdj 223\nI0524 19:04:33.757011 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/gcpw 477\nI0524 19:04:33.957096 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/fw6g 400\nI0524 19:04:34.157026 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/gzs 453\nI0524 19:04:34.357041 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/nn7 580\nI0524 19:04:34.557031 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/s9qk 262\nI0524 19:04:34.757133 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/ph4t 245\nI0524 19:04:34.957087 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/7dsj 341\n" STEP: limiting log lines May 24 19:04:35.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 logs logs-generator logs-generator --tail=1' May 24 19:04:35.265: INFO: stderr: "" May 24 19:04:35.265: INFO: stdout: "I0524 19:04:35.157036 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/gzkh 428\n" May 24 19:04:35.265: INFO: got output "I0524 19:04:35.157036 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/gzkh 428\n" STEP: limiting log bytes May 24 19:04:35.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 logs logs-generator logs-generator --limit-bytes=1' May 24 19:04:35.392: INFO: stderr: "" May 24 19:04:35.392: INFO: stdout: "I" May 24 19:04:35.392: INFO: got output "I" STEP: exposing timestamps May 24 19:04:35.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 logs logs-generator logs-generator --tail=1 --timestamps' May 24 19:04:35.499: INFO: stderr: "" May 24 19:04:35.499: INFO: stdout: "2021-05-24T19:04:35.357223402Z I0524 19:04:35.357079 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/h8w 367\n" May 24 19:04:35.499: INFO: got output "2021-05-24T19:04:35.357223402Z I0524 19:04:35.357079 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/h8w 367\n" STEP: restricting to a time range May 24 19:04:38.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 
--kubeconfig=/root/.kube/config --namespace=kubectl-8508 logs logs-generator logs-generator --since=1s' May 24 19:04:38.129: INFO: stderr: "" May 24 19:04:38.129: INFO: stdout: "I0524 19:04:37.157143 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/c5s2 579\nI0524 19:04:37.357053 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/wxnc 517\nI0524 19:04:37.557122 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/cttm 348\nI0524 19:04:37.757048 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/fk2w 573\nI0524 19:04:37.957133 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/zr52 296\n" May 24 19:04:38.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 logs logs-generator logs-generator --since=24h' May 24 19:04:38.262: INFO: stderr: "" May 24 19:04:38.262: INFO: stdout: "I0524 19:04:31.756891 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/2tgz 232\nI0524 19:04:31.957141 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/d842 372\nI0524 19:04:32.157143 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/v68 393\nI0524 19:04:32.357113 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/g9kd 274\nI0524 19:04:32.556926 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/hg8 307\nI0524 19:04:32.757125 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6fc 383\nI0524 19:04:32.957089 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/vnfl 363\nI0524 19:04:33.157096 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/4prl 578\nI0524 19:04:33.357057 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/7n8 270\nI0524 19:04:33.557092 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/8zdj 223\nI0524 19:04:33.757011 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/gcpw 477\nI0524 19:04:33.957096 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/fw6g 400\nI0524 19:04:34.157026 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/gzs 453\nI0524 19:04:34.357041 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/nn7 580\nI0524 19:04:34.557031 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/s9qk 262\nI0524 19:04:34.757133 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/ph4t 245\nI0524 19:04:34.957087 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/7dsj 341\nI0524 19:04:35.157036 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/gzkh 428\nI0524 19:04:35.357079 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/h8w 367\nI0524 19:04:35.557146 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/jk2 351\nI0524 19:04:35.757155 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/9vw 351\nI0524 19:04:35.957089 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/2dqx 357\nI0524 19:04:36.157093 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/lnh 276\nI0524 19:04:36.357239 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/l42r 237\nI0524 19:04:36.557157 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/9wl9 578\nI0524 19:04:36.757107 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/default/pods/nqc 593\nI0524 19:04:36.957148 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/7dgn 482\nI0524 19:04:37.157143 1 logs_generator.go:76] 27 PUT 
/api/v1/namespaces/kube-system/pods/c5s2 579\nI0524 19:04:37.357053 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/wxnc 517\nI0524 19:04:37.557122 1 logs_generator.go:76] 29 GET /api/v1/namespaces/kube-system/pods/cttm 348\nI0524 19:04:37.757048 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/fk2w 573\nI0524 19:04:37.957133 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/zr52 296\nI0524 19:04:38.157053 1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/4tfb 329\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 May 24 19:04:38.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-8508 delete pod logs-generator' May 24 19:04:47.903: INFO: stderr: "" May 24 19:04:47.903: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:47.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8508" for this suite. • [SLOW TEST:17.207 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":27,"skipped":490,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:28.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-4h7bw in namespace proxy-9898 I0524 19:04:28.080210 21 runners.go:190] Created replication controller with name: proxy-service-4h7bw, namespace: proxy-9898, replica count: 1 I0524 19:04:29.130764 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:04:30.131112 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 19:04:31.131465 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 19:04:32.131828 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 19:04:33.132150 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 19:04:34.132459 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0524 19:04:35.132646 21 runners.go:190] proxy-service-4h7bw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:04:35.135: INFO: setup took 7.067692492s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 11.017751ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 11.274834ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 11.168355ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 11.301078ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 11.200987ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 11.158011ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 11.17501ms) May 24 19:04:35.147: INFO: (0) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 11.414599ms) May 24 19:04:35.148: INFO: (0) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 12.79266ms) May 24 19:04:35.148: INFO: (0) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 12.620804ms) May 24 19:04:35.149: INFO: (0) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 13.727802ms) May 24 19:04:35.150: INFO: (0) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 14.327913ms) May 24 19:04:35.150: INFO: (0) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 14.228577ms) May 24 19:04:35.154: INFO: (0) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 18.282951ms) May 24 19:04:35.154: INFO: (0) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 18.401204ms) May 24 19:04:35.155: INFO: (0) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test (200; 5.487444ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 5.455024ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 5.462233ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 5.639586ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... 
(200; 5.611019ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 5.621784ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 5.939262ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 5.772936ms) May 24 19:04:35.161: INFO: (1) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 6.099077ms) May 24 19:04:35.168: INFO: (2) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 6.886322ms) May 24 19:04:35.169: INFO: (2) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 7.838602ms) May 24 19:04:35.169: INFO: (2) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 7.890555ms) May 24 19:04:35.169: INFO: (2) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 7.985911ms) May 24 19:04:35.170: INFO: (2) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 8.160467ms) May 24 19:04:35.170: INFO: (2) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 8.176971ms) May 24 19:04:35.170: INFO: (2) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 8.293255ms) May 24 19:04:35.170: INFO: (2) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test (200; 10.17543ms) May 24 19:04:35.172: INFO: (2) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 10.276824ms) May 24 19:04:35.172: INFO: (2) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 10.153008ms) May 24 19:04:35.172: INFO: (2) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 10.462349ms) May 24 19:04:35.172: INFO: (2) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 10.530263ms) May 24 19:04:35.172: INFO: (2) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 10.957067ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.275761ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.068266ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.250421ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.290723ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.301545ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.379655ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.464387ms) May 24 19:04:35.177: INFO: (3) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.731083ms) May 24 19:04:35.178: INFO: (3) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.876436ms) May 24 19:04:35.178: INFO: (3) 
/api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.92725ms) May 24 19:04:35.178: INFO: (3) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 5.040644ms) May 24 19:04:35.178: INFO: (3) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 5.047305ms) May 24 19:04:35.178: INFO: (3) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 5.163167ms) May 24 19:04:35.178: INFO: (3) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 5.230275ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.002338ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.16375ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.228702ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.340355ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.437711ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.506314ms) May 24 19:04:35.182: INFO: (4) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.403963ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.490809ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 4.684668ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.571529ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.512082ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.529679ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.935798ms) May 24 19:04:35.183: INFO: (4) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 5.387535ms) May 24 19:04:35.184: INFO: (4) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 5.497691ms) May 24 19:04:35.187: INFO: (5) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 3.716733ms) May 24 19:04:35.187: INFO: (5) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 3.716255ms) May 24 19:04:35.187: INFO: (5) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... 
(200; 3.688784ms) May 24 19:04:35.187: INFO: (5) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 3.703143ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 3.969658ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 3.939501ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.141539ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.26537ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.552396ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.50085ms) May 24 19:04:35.188: INFO: (5) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.69268ms) May 24 19:04:35.189: INFO: (5) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.875553ms) May 24 19:04:35.189: INFO: (5) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 4.72772ms) May 24 19:04:35.189: INFO: (5) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.828066ms) May 24 19:04:35.189: INFO: (5) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.817707ms) May 24 19:04:35.189: INFO: (5) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test<... (200; 3.192395ms) May 24 19:04:35.192: INFO: (6) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 3.262428ms) May 24 19:04:35.192: INFO: (6) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.214121ms) May 24 19:04:35.192: INFO: (6) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 3.61921ms) May 24 19:04:35.192: INFO: (6) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.735412ms) May 24 19:04:35.193: INFO: (6) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.016778ms) May 24 19:04:35.193: INFO: (6) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.139345ms) May 24 19:04:35.193: INFO: (6) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.069161ms) May 24 19:04:35.193: INFO: (6) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.296611ms) May 24 19:04:35.193: INFO: (6) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.524632ms) May 24 19:04:35.193: INFO: (6) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.730499ms) May 24 19:04:35.194: INFO: (6) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.718817ms) May 24 19:04:35.194: INFO: (6) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 5.049817ms) May 24 19:04:35.194: INFO: (6) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... 
(200; 5.17169ms) May 24 19:04:35.197: INFO: (7) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.265845ms) May 24 19:04:35.198: INFO: (7) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.560955ms) May 24 19:04:35.198: INFO: (7) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 3.551883ms) May 24 19:04:35.198: INFO: (7) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 3.867358ms) May 24 19:04:35.198: INFO: (7) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.075043ms) May 24 19:04:35.198: INFO: (7) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.570414ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.637902ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.55407ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.557887ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.59733ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.78161ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.937851ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 5.130365ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 5.146262ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 5.233728ms) May 24 19:04:35.199: INFO: (7) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test (200; 3.219742ms) May 24 19:04:35.203: INFO: (8) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.326143ms) May 24 19:04:35.203: INFO: (8) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... 
(200; 3.360004ms) May 24 19:04:35.203: INFO: (8) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.843064ms) May 24 19:04:35.203: INFO: (8) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 3.986945ms) May 24 19:04:35.203: INFO: (8) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 3.99819ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.07761ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.131789ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.157775ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.181687ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.22305ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.376606ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.605811ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.630708ms) May 24 19:04:35.204: INFO: (8) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 4.822004ms) May 24 19:04:35.207: INFO: (9) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.228505ms) May 24 19:04:35.208: INFO: (9) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 3.797365ms) May 24 19:04:35.208: INFO: (9) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 3.94092ms) May 24 19:04:35.208: INFO: (9) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.111154ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.290281ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.803395ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.717206ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.839725ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 4.679589ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.686758ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... 
(200; 4.79166ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.733162ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.791647ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.836463ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.961592ms) May 24 19:04:35.209: INFO: (9) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 3.662922ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.355114ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.473479ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.401801ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.411305ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.357226ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.428344ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test (200; 4.904459ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.869559ms) May 24 19:04:35.214: INFO: (10) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 4.823552ms) May 24 19:04:35.218: INFO: (11) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 3.178654ms) May 24 19:04:35.218: INFO: (11) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 3.709076ms) May 24 19:04:35.218: INFO: (11) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 3.683146ms) May 24 19:04:35.218: INFO: (11) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.823325ms) May 24 19:04:35.218: INFO: (11) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... 
(200; 4.070855ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.100778ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.66843ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.86917ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.834195ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.995836ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.852257ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.974467ms) May 24 19:04:35.219: INFO: (11) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.877997ms) May 24 19:04:35.220: INFO: (11) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 5.213819ms) May 24 19:04:35.223: INFO: (12) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 3.126716ms) May 24 19:04:35.223: INFO: (12) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 3.276116ms) May 24 19:04:35.223: INFO: (12) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 3.505374ms) May 24 19:04:35.223: INFO: (12) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test<... (200; 4.497659ms) May 24 19:04:35.224: INFO: (12) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.590388ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 3.720124ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 3.737547ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 3.804744ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 3.854674ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 3.86239ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 3.941613ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 4.054312ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.041607ms) May 24 19:04:35.228: INFO: (13) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... 
(200; 4.129377ms) May 24 19:04:35.229: INFO: (13) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.364604ms) May 24 19:04:35.229: INFO: (13) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.355467ms) May 24 19:04:35.229: INFO: (13) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.373823ms) May 24 19:04:35.229: INFO: (13) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.339088ms) May 24 19:04:35.229: INFO: (13) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.329926ms) May 24 19:04:35.229: INFO: (13) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.44364ms) May 24 19:04:35.232: INFO: (14) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 2.876853ms) May 24 19:04:35.233: INFO: (14) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 3.60491ms) May 24 19:04:35.233: INFO: (14) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 3.647167ms) May 24 19:04:35.233: INFO: (14) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 3.607927ms) May 24 19:04:35.233: INFO: (14) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 3.81919ms) May 24 19:04:35.233: INFO: (14) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.144429ms) May 24 19:04:35.233: INFO: (14) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.615563ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.5708ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.64833ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.779789ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.913524ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 5.074136ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 5.114981ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 5.135132ms) May 24 19:04:35.234: INFO: (14) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 5.09578ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 3.564856ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 3.694167ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 3.816575ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 3.74448ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 3.780846ms) May 24 19:04:35.238: INFO: (15) 
/api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.166778ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.172672ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.234738ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.146069ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 4.283755ms) May 24 19:04:35.239: INFO: (15) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.254571ms) May 24 19:04:35.238: INFO: (15) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test<... (200; 4.452841ms) May 24 19:04:35.241: INFO: (16) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 2.202628ms) May 24 19:04:35.241: INFO: (16) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 3.638611ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 3.883336ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.19124ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.223387ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.19702ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.301438ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 4.289838ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.222467ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 4.29659ms) May 24 19:04:35.243: INFO: (16) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 4.285062ms) May 24 19:04:35.246: INFO: (17) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 2.42924ms) May 24 19:04:35.248: INFO: (17) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.331956ms) May 24 19:04:35.248: INFO: (17) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 4.563386ms) May 24 19:04:35.248: INFO: (17) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:462/proxy/: tls qux (200; 5.049471ms) May 24 19:04:35.248: INFO: (17) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... 
(200; 5.202436ms) May 24 19:04:35.249: INFO: (17) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:460/proxy/: tls baz (200; 6.089501ms) May 24 19:04:35.250: INFO: (17) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 6.226988ms) May 24 19:04:35.250: INFO: (17) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: test (200; 10.15544ms) May 24 19:04:35.257: INFO: (18) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 3.995892ms) May 24 19:04:35.257: INFO: (18) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 3.984274ms) May 24 19:04:35.257: INFO: (18) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.010647ms) May 24 19:04:35.257: INFO: (18) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname2/proxy/: bar (200; 3.928484ms) May 24 19:04:35.257: INFO: (18) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname2/proxy/: bar (200; 4.095608ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.010624ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.097371ms) May 24 19:04:35.257: INFO: (18) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.027655ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.418208ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:1080/proxy/: ... (200; 4.418775ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.485052ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... (200; 4.414822ms) May 24 19:04:35.258: INFO: (18) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: ... (200; 2.915659ms) May 24 19:04:35.261: INFO: (19) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:1080/proxy/: test<... 
(200; 3.067164ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/services/proxy-service-4h7bw:portname1/proxy/: foo (200; 4.4982ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/: foo (200; 4.454928ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f/proxy/: test (200; 4.562702ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:162/proxy/: bar (200; 4.612305ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname2/proxy/: tls qux (200; 4.532565ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/services/https:proxy-service-4h7bw:tlsportname1/proxy/: tls baz (200; 4.568183ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/pods/http:proxy-service-4h7bw-84r8f:160/proxy/: foo (200; 4.66157ms) May 24 19:04:35.263: INFO: (19) /api/v1/namespaces/proxy-9898/pods/https:proxy-service-4h7bw-84r8f:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:47.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1963" for this suite. 
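------------------------------
Note on the proxy spec above: every request in the (0)-(19) attempt blocks goes through the apiserver's proxy subresource rather than to the pod's IP. The "http:"/"https:" prefix and the trailing port in each logged path select the scheme and target port, and named service ports (portname1, tlsportname2, ...) resolve through the Service spec. One of those requests can be replayed by hand with kubectl's raw client, assuming a kubeconfig pointed at the same cluster (paths copied verbatim from the log; the namespace and pod no longer exist once the suite tears down):

    # GET through the pod proxy subresource; the path segment is <scheme>:<pod-name>:<port>
    kubectl get --raw "/api/v1/namespaces/proxy-9898/pods/proxy-service-4h7bw-84r8f:160/proxy/"
    # GET through the service proxy subresource, addressed by named port
    kubectl get --raw "/api/v1/namespaces/proxy-9898/services/http:proxy-service-4h7bw:portname1/proxy/"

Per the log, the first path answers "foo" and the second proxies to whichever endpoint backs portname1, matching the 200 responses recorded above.
------------------------------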
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":36,"skipped":658,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:47.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 24 19:04:48.036: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7196 3a6a7871-8cd4-4c7a-860b-7e30c513a038 827913 0 2021-05-24 19:04:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-24 19:04:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:04:48.036: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7196 3a6a7871-8cd4-4c7a-860b-7e30c513a038 827914 0 2021-05-24 19:04:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-24 19:04:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 24 19:04:48.050: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7196 3a6a7871-8cd4-4c7a-860b-7e30c513a038 827915 0 2021-05-24 19:04:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-24 19:04:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 19:04:48.050: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7196 3a6a7871-8cd4-4c7a-860b-7e30c513a038 827916 0 2021-05-24 19:04:48 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-05-24 19:04:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:48.050: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "watch-7196" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":37,"skipped":663,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:46.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:04:46.554: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6f5f18e6-e6cc-4401-81a9-e10f4cd7d4d1" in namespace "security-context-test-2835" to be "Succeeded or Failed" May 24 19:04:46.557: INFO: Pod "alpine-nnp-false-6f5f18e6-e6cc-4401-81a9-e10f4cd7d4d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310549ms May 24 19:04:48.622: INFO: Pod "alpine-nnp-false-6f5f18e6-e6cc-4401-81a9-e10f4cd7d4d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06825695s May 24 19:04:48.622: INFO: Pod "alpine-nnp-false-6f5f18e6-e6cc-4401-81a9-e10f4cd7d4d1" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:48.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2835" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:46.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-603/configmap-test-25c74a01-3fdd-4ce8-9eb9-74d05a5da39e STEP: Creating a pod to test consume configMaps May 24 19:04:46.493: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9" in namespace "configmap-603" to be "Succeeded or Failed" May 24 19:04:46.495: INFO: Pod "pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197084ms May 24 19:04:48.498: INFO: Pod "pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005271212s May 24 19:04:50.722: INFO: Pod "pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229232761s May 24 19:04:52.731: INFO: Pod "pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238181772s STEP: Saw pod success May 24 19:04:52.731: INFO: Pod "pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9" satisfied condition "Succeeded or Failed" May 24 19:04:52.835: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9 container env-test: STEP: delete the pod May 24 19:04:53.437: INFO: Waiting for pod pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9 to disappear May 24 19:04:53.632: INFO: Pod pod-configmaps-ff0cd4f5-72e0-4c66-a170-6024b72911a9 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:53.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-603" for this suite. • [SLOW TEST:7.192 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":478,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:00:50.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-09703fcf-377e-4e02-a9a9-ea4bcfe03291 in namespace container-probe-4863 May 24 19:00:52.766: INFO: Started pod liveness-09703fcf-377e-4e02-a9a9-ea4bcfe03291 in namespace container-probe-4863 STEP: checking the pod's current state and verifying that restartCount is present May 24 19:00:52.769: INFO: Initial restart count of pod liveness-09703fcf-377e-4e02-a9a9-ea4bcfe03291 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:54.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4863" for this suite. 
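------------------------------
Note on the probe spec above: the pass condition is purely negative evidence. The pod's restartCount is read once at startup and must still be 0 when the pod is deleted roughly four minutes later, showing that the tcp:8080 liveness probe kept succeeding the whole time. A minimal hand-built pod of the same shape, using an illustrative image and port rather than the exact container from the log (saved as, say, liveness-tcp-demo.yaml):

    # liveness-tcp-demo.yaml -- illustrative manifest, not the pod from the log;
    # nginx listens on 80, so the tcpSocket probe always connects and the
    # container is never restarted (the log's pod probes tcp:8080 instead)
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-tcp-demo
    spec:
      containers:
      - name: web
        image: nginx
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10

    kubectl apply -f liveness-tcp-demo.yaml
    # the value this spec asserts on; it should stay 0
    kubectl get pod liveness-tcp-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
------------------------------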
• [SLOW TEST:243.512 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:48.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:04:48.746: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 19:04:51.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:04:53.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 19:04:55.235: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:04:58.243: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:04:58.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2178" for this suite. STEP: Destroying namespace "webhook-2178-markers" for this suite. 
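------------------------------
Note on the webhook spec above: MutatingWebhookConfiguration objects are cluster-scoped, so the spec tags the ones it creates with a label, lists them by that label, and removes them with a single delete-collection call; the final ConfigMap creation proves the deletion took effect because the object comes back unmutated. The list/delete-collection pair looks like this by hand, with an illustrative label value (the spec's actual label is not shown in the log):

    kubectl get mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
    kubectl delete mutatingwebhookconfigurations -l e2e-list-test-webhooks=demo
------------------------------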
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.377 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":38,"skipped":671,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:58.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0524 19:03:59.262571 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 24 19:05:01.285: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:01.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7937" for this suite. • [SLOW TEST:63.111 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:53.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override arguments May 24 19:04:53.730: INFO: Waiting up to 5m0s for pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1" in namespace "containers-9857" to be "Succeeded or Failed" May 24 19:04:53.736: INFO: Pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.337116ms May 24 19:04:55.740: INFO: Pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009722412s May 24 19:04:57.744: INFO: Pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014075007s May 24 19:04:59.749: INFO: Pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018728138s May 24 19:05:01.752: INFO: Pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022250463s STEP: Saw pod success May 24 19:05:01.753: INFO: Pod "client-containers-12f63acb-d086-4d81-8267-431c765457b1" satisfied condition "Succeeded or Failed" May 24 19:05:01.756: INFO: Trying to get logs from node leguer-worker pod client-containers-12f63acb-d086-4d81-8267-431c765457b1 container agnhost-container: STEP: delete the pod May 24 19:05:01.771: INFO: Waiting for pod client-containers-12f63acb-d086-4d81-8267-431c765457b1 to disappear May 24 19:05:01.774: INFO: Pod client-containers-12f63acb-d086-4d81-8267-431c765457b1 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:01.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9857" for this suite. • [SLOW TEST:8.113 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":491,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:03:33.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4097 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4097 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4097 May 24 19:03:33.832: INFO: Found 0 stateful pods, waiting for 1 May 24 19:03:43.838: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will 
halt with unhealthy stateful pod May 24 19:03:43.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:03:44.082: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:03:44.082: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:03:44.082: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:03:44.085: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 19:03:54.089: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 19:03:54.090: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:03:54.107: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999515s May 24 19:03:55.111: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996540089s May 24 19:03:56.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992436929s May 24 19:03:57.119: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98835333s May 24 19:03:58.124: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.984429416s May 24 19:03:59.127: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.980341547s May 24 19:04:00.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.977386393s May 24 19:04:01.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.972612825s May 24 19:04:02.140: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.968377933s May 24 19:04:03.144: INFO: Verifying statefulset ss doesn't scale past 1 for another 964.067492ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4097 May 24 19:04:04.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:04:04.386: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:04:04.386: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:04:04.386: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:04:04.390: INFO: Found 1 stateful pods, waiting for 3 May 24 19:04:14.395: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 19:04:14.395: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 19:04:14.395: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 24 19:04:14.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:04:14.660: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:04:14.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" May 24 19:04:14.660: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:04:14.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:04:14.870: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:04:14.870: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:04:14.870: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:04:14.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:04:15.106: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:04:15.106: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:04:15.106: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:04:15.106: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:04:15.110: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 24 19:04:25.117: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 19:04:25.117: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 19:04:25.117: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 19:04:25.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999539s May 24 19:04:26.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9961813s May 24 19:04:27.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991566006s May 24 19:04:28.144: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986806947s May 24 19:04:29.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981853596s May 24 19:04:30.157: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97670664s May 24 19:04:31.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969575094s May 24 19:04:32.166: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964635563s May 24 19:04:33.171: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959996685s May 24 19:04:34.175: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.507132ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4097 May 24 19:04:35.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:04:35.404: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:04:35.404: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:04:35.404: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' May 24 19:04:35.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:04:35.637: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:04:35.637: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:04:35.637: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:04:35.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-4097 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:04:35.883: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:04:35.883: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:04:35.883: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:04:35.883: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 19:05:05.911: INFO: Deleting all statefulset in ns statefulset-4097 May 24 19:05:05.913: INFO: Scaling statefulset ss to 0 May 24 19:05:05.923: INFO: Waiting for statefulset status.replicas updated to 0 May 24 19:05:05.926: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:05.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4097" for this suite. 
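------------------------------
The ordered scale-up/scale-down behaviour this test just verified can be reproduced by hand. A minimal sketch with kubectl, reusing the ss StatefulSet, namespace, labels, and httpd docroot trick shown in this log, and assuming the default podManagementPolicy of OrderedReady:

# Fail ss-0's readiness probe by hiding the file the probe fetches, then
# scale up; the controller will not create ss-1/ss-2 while ss-0 is unready.
kubectl -n statefulset-4097 exec ss-0 -- /bin/sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'
kubectl -n statefulset-4097 scale statefulset ss --replicas=3
kubectl -n statefulset-4097 get pods -l baz=blah,foo=bar -w
# Restore readiness; the scale-up resumes in ordinal order (ss-1, then ss-2).
kubectl -n statefulset-4097 exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'
------------------------------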
• [SLOW TEST:92.167 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":14,"skipped":271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:01:03.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod test-webserver-4564936f-f53a-4506-ac85-130f7c3cdc6f in namespace container-probe-8822 May 24 19:01:05.914: INFO: Started pod test-webserver-4564936f-f53a-4506-ac85-130f7c3cdc6f in namespace container-probe-8822 STEP: checking the pod's current state and verifying that restartCount is present May 24 19:01:05.917: INFO: Initial restart count of pod test-webserver-4564936f-f53a-4506-ac85-130f7c3cdc6f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:07.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8822" for this suite. 
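------------------------------
The probe test above only asserts that restartCount stays at 0 over the observation window (about four minutes in this run). A minimal pod with the same shape of /healthz HTTP liveness probe; the agnhost image and its netexec server (which answers 200 on /healthz) are assumptions here, since the log does not print the pod spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok
spec:
  containers:
  - name: web
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image choice
    args: ["netexec", "--http-port=8080"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# While the probe keeps passing, the kubelet never restarts the container:
kubectl get pod liveness-ok -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0
------------------------------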
• [SLOW TEST:243.289 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":321,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:58.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 24 19:05:05.020: INFO: Successfully updated pod "adopt-release-4lzbr" STEP: Checking that the Job readopts the Pod May 24 19:05:05.020: INFO: Waiting up to 15m0s for pod "adopt-release-4lzbr" in namespace "job-9312" to be "adopted" May 24 19:05:05.023: INFO: Pod "adopt-release-4lzbr": Phase="Running", Reason="", readiness=true. Elapsed: 2.565123ms May 24 19:05:07.026: INFO: Pod "adopt-release-4lzbr": Phase="Running", Reason="", readiness=true. Elapsed: 2.005755386s May 24 19:05:07.026: INFO: Pod "adopt-release-4lzbr" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 24 19:05:07.536: INFO: Successfully updated pod "adopt-release-4lzbr" STEP: Checking that the Job releases the Pod May 24 19:05:07.536: INFO: Waiting up to 15m0s for pod "adopt-release-4lzbr" in namespace "job-9312" to be "released" May 24 19:05:07.539: INFO: Pod "adopt-release-4lzbr": Phase="Running", Reason="", readiness=true. Elapsed: 2.520198ms May 24 19:05:09.542: INFO: Pod "adopt-release-4lzbr": Phase="Running", Reason="", readiness=true. Elapsed: 2.006188077s May 24 19:05:09.542: INFO: Pod "adopt-release-4lzbr" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:09.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9312" for this suite. 
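------------------------------
For reference, the "orphaning" and "releasing" steps in the Job test above are plain metadata edits that the Job controller then reconciles. Roughly, with kubectl (the pod name comes from this log; controller-uid and job-name are the Job controller's default pod labels):

# Orphan: drop the controller ownerReference. The Job re-adopts the pod
# because its labels still match the Job's selector.
kubectl -n job-9312 patch pod adopt-release-4lzbr --type=json \
  -p='[{"op":"remove","path":"/metadata/ownerReferences"}]'
# Release: remove the matching labels; the controller then strips its
# ownerReference from the pod instead of re-adopting it.
kubectl -n job-9312 label pod adopt-release-4lzbr controller-uid- job-name-
------------------------------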
• [SLOW TEST:11.088 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":39,"skipped":677,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:07.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 19:05:07.574: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 19:05:09.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479907, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479907, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479907, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479907, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:05:12.595: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:05:12.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:13.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1033" for this suite. 
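------------------------------
A CRD opts into conversion like this through its spec.conversion stanza; a sketch of the apiextensions.k8s.io/v1 shape, reusing the service name and namespace from this log (the path and port are assumptions, and the caBundle is elided):

conversion:
  strategy: Webhook
  webhook:
    conversionReviewVersions: ["v1", "v1beta1"]
    clientConfig:
      # caBundle: <base64-encoded CA cert for the serving cert set up above>
      service:
        name: e2e-test-crd-conversion-webhook
        namespace: crd-webhook-1033
        path: /crdconvert   # assumed path
        port: 9443          # assumed port
------------------------------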
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.677 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":15,"skipped":323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:06.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:05:06.084: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 19:05:10.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4315 --namespace=crd-publish-openapi-4315 create -f -' May 24 19:05:10.485: INFO: stderr: "" May 24 19:05:10.485: INFO: stdout: "e2e-test-crd-publish-openapi-5985-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 24 19:05:10.485: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4315 --namespace=crd-publish-openapi-4315 delete e2e-test-crd-publish-openapi-5985-crds test-cr' May 24 19:05:10.614: INFO: stderr: "" May 24 19:05:10.615: INFO: stdout: "e2e-test-crd-publish-openapi-5985-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 24 19:05:10.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4315 --namespace=crd-publish-openapi-4315 apply -f -' May 24 19:05:10.887: INFO: stderr: "" May 24 19:05:10.887: INFO: stdout: "e2e-test-crd-publish-openapi-5985-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 24 19:05:10.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4315 --namespace=crd-publish-openapi-4315 delete e2e-test-crd-publish-openapi-5985-crds test-cr' May 24 19:05:11.007: INFO: stderr: "" May 24 19:05:11.007: INFO: stdout: "e2e-test-crd-publish-openapi-5985-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 24 19:05:11.007: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4315 explain e2e-test-crd-publish-openapi-5985-crds' May 24 19:05:11.263: INFO: stderr: "" May 24 19:05:11.263: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5985-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:15.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4315" for this suite. • [SLOW TEST:9.290 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":15,"skipped":331,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:09.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:05:09.614: INFO: Creating ReplicaSet my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c May 24 19:05:09.622: INFO: Pod name my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c: Found 0 pods out of 1 May 24 19:05:14.626: INFO: Pod name my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c: Found 1 pods out of 1 May 24 19:05:14.626: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c" is running May 24 19:05:14.628: INFO: Pod "my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c-s7cz2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 
+0000 UTC LastTransitionTime:2021-05-24 19:05:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:05:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:05:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-05-24 19:05:09 +0000 UTC Reason: Message:}]) May 24 19:05:14.629: INFO: Trying to dial the pod May 24 19:05:19.640: INFO: Controller my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c: Got expected result from replica 1 [my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c-s7cz2]: "my-hostname-basic-7e934021-cc17-46c3-bd72-efa102f0840c-s7cz2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:19.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8265" for this suite. • [SLOW TEST:10.064 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":40,"skipped":695,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:13.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-58448f16-8f63-421c-a762-e77b772e42b4 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-58448f16-8f63-421c-a762-e77b772e42b4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:20.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6491" for this suite. 
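------------------------------
The "waiting to observe update in volume" step above works because the kubelet periodically re-syncs ConfigMap-backed volumes: an update reaches the mounted files after the sync period plus cache TTL, not instantly. A minimal reproduction (all names here are hypothetical):

kubectl create configmap demo-cm --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/cm/key; sleep 2; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF
# Update the ConfigMap in place and watch the mounted value roll over:
kubectl create configmap demo-cm --from-literal=key=value-2 --dry-run=client -o yaml | kubectl replace -f -
kubectl logs cm-watcher -f
------------------------------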
• [SLOW TEST:6.089 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":406,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:15.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5334 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5334 I0524 19:05:15.398567 27 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5334, replica count: 2 I0524 19:05:18.449319 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:05:18.449: INFO: Creating new exec pod May 24 19:05:21.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-5334 exec execpod7h6hq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 24 19:05:21.688: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 24 19:05:21.688: INFO: stdout: "" May 24 19:05:21.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-5334 exec execpod7h6hq -- /bin/sh -x -c nc -zv -t -w 2 10.96.253.10 80' May 24 19:05:21.919: INFO: stderr: "+ nc -zv -t -w 2 10.96.253.10 80\nConnection to 10.96.253.10 80 port [tcp/http] succeeded!\n" May 24 19:05:21.919: INFO: stdout: "" May 24 19:05:21.919: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:21.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5334" for this suite. 
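------------------------------
The type flip above amounts to rewriting the Service spec; the backing endpoints come from the externalname-service replication controller the test creates (its jig also sets a matching selector, which the log does not show). A sketch with kubectl, using the namespace, service name, and port from this log; the original external name is not shown, so example.com stands in:

kubectl -n services-5334 create service externalname externalname-service --external-name=example.com
# Switch to ClusterIP: set the type, clear externalName (null deletes the
# field in a JSON merge patch), and declare the port:
kubectl -n services-5334 patch service externalname-service --type=merge \
  -p='{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
# Same reachability check the test runs from its exec pod:
kubectl -n services-5334 exec execpod7h6hq -- nc -zv -t -w 2 externalname-service 80
------------------------------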
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:6.591 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":16,"skipped":336,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:20.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:05:20.122: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:22.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8814" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:21.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:05:21.993: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:23.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5553" for this suite. 
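------------------------------
The create/delete cycle exercised above needs nothing beyond a registered CRD; a minimal apiextensions.k8s.io/v1 definition (the group and names here are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl delete crd noxus.example.com
------------------------------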
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":17,"skipped":348,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:01.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD May 24 19:05:01.824: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:29.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7677" for this suite. • [SLOW TEST:27.735 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":32,"skipped":492,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:29.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Request ServerVersion STEP: Confirm major version May 24 19:05:29.566: INFO: Major version: 1 STEP: Confirm minor version May 24 19:05:29.566: INFO: cleanMinorVersion: 20 May 24 19:05:29.567: INFO: Minor version: 20 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:29.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-5095" for this suite. 
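------------------------------
The version check above is a single GET against the discovery endpoint; the same data is available directly:

kubectl get --raw /version   # JSON with "major": "1", "minor": "20", gitVersion, etc.
kubectl version -o json      # client and server versions side by side
------------------------------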
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":33,"skipped":497,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:29.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:29.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4581" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":34,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:48.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod May 24 19:04:48.748: INFO: PodSpec: initContainers in spec.initContainers May 24 19:05:35.766: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3a3b5afc-dde0-4716-82fc-9fe2e0c52f03", GenerateName:"", Namespace:"init-container-50", SelfLink:"", UID:"7c55af29-5979-42fe-b370-f4884fc791a7", ResourceVersion:"829245", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"748272397"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.173\"\n ],\n \"mac\": \"82:a7:46:28:03:89\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.1.173\"\n ],\n \"mac\": \"82:a7:46:28:03:89\",\n \"default\": true,\n \"dns\": {}\n}]"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", 
ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0040a5100), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040a5120)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0040a5140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040a5160)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0040a5180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0040a51a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gxmx5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc007c777c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gxmx5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gxmx5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gxmx5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00576e168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a4f810), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00576e210)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00576e250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00576e258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00576e25c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004b92db0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479888, loc:(*time.Location)(0x7975ee0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.1.173", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.173"}}, StartTime:(*v1.Time)(0xc0040a51c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a4f8f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a4f960)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://62063f3aeb555c9b423af588627fa50fe2ad1580316437a0cb53ed8e4425aedc", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0040a5200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0040a51e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00576e30f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:35.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-50" for this suite. 
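------------------------------
The PodSpec dumped above reduces to a short manifest. Reproducing it (images, commands, and resources taken from the dump) shows the same behaviour: init1 fails and is restarted with backoff, so init2 and run1 never leave Waiting.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      limits:
        cpu: 100m
EOF
# init1's restart count climbs (RestartCount:3 in the dump above) while
# init2 and run1 stay Waiting:
kubectl get pod pod-init-fail -o jsonpath='{.status.initContainerStatuses[0].restartCount}'
------------------------------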
• [SLOW TEST:47.083 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":32,"skipped":575,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:23.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 19:05:23.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-127 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' May 24 19:05:23.187: INFO: stderr: "" May 24 19:05:23.187: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 24 19:05:28.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-127 get pod e2e-test-httpd-pod -o json' May 24 19:05:28.349: INFO: stderr: "" May 24 19:05:28.349: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.185\\\"\\n ],\\n \\\"mac\\\": \\\"2a:a7:ef:4c:33:79\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.1.185\\\"\\n ],\\n \\\"mac\\\": \\\"2a:a7:ef:4c:33:79\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"creationTimestamp\": \"2021-05-24T19:05:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n 
\"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-24T19:05:23Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:k8s.v1.cni.cncf.io/network-status\": {},\n \"f:k8s.v1.cni.cncf.io/networks-status\": {}\n }\n }\n },\n \"manager\": \"multus\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-24T19:05:23Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.185\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-24T19:05:24Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-127\",\n \"resourceVersion\": \"829039\",\n \"uid\": \"e80aeebb-330b-48e5-aee4-0476e7f928f6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-qwfvr\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-qwfvr\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-qwfvr\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T19:05:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T19:05:24Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T19:05:24Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-24T19:05:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n 
}\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://08904faa8f2a88aa6a3142fd7b4e57fb00e78cb4c07bf453dee35dba5df79d22\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-24T19:05:24Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.7\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.185\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.185\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-05-24T19:05:23Z\"\n }\n}\n" STEP: replace the image in the pod May 24 19:05:28.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-127 replace -f -' May 24 19:05:28.692: INFO: stderr: "" May 24 19:05:28.692: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 May 24 19:05:28.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-127 delete pods e2e-test-httpd-pod' May 24 19:05:37.898: INFO: stderr: "" May 24 19:05:37.898: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:37.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-127" for this suite. 
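------------------------------
The replace step above round-trips the live object through kubectl. A sketch of the same flow, with sed standing in for the test's in-memory image substitution; note that only the spec image is verified before the pod is deleted:

kubectl -n kubectl-127 run e2e-test-httpd-pod \
  --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod
kubectl -n kubectl-127 get pod e2e-test-httpd-pod -o json \
  | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
kubectl -n kubectl-127 get pod e2e-test-httpd-pod \
  -o jsonpath='{.spec.containers[0].image}'   # docker.io/library/busybox:1.29
------------------------------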
• [SLOW TEST:14.875 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":18,"skipped":356,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:22.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-2228 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 19:05:22.347: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 24 19:05:22.368: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 19:05:24.373: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:26.372: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:28.372: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:30.372: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:32.373: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:34.373: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:36.524: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 19:05:38.372: INFO: The status of Pod netserver-0 is Running (Ready = true) May 24 19:05:38.377: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 24 19:05:42.404: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 24 19:05:42.404: INFO: Going to poll 10.244.1.184 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 24 19:05:42.407: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.184:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2228 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:05:42.407: INFO: >>> kubeConfig: /root/.kube/config May 24 19:05:42.535: INFO: Found all 1 expected endpoints: [netserver-0] May 24 19:05:42.535: INFO: Going to poll 10.244.2.82 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 24 19:05:42.538: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.82:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2228 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:05:42.538: INFO: >>> kubeConfig: /root/.kube/config May 24 19:05:42.664: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:42.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2228" for this suite. • [SLOW TEST:20.435 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":466,"failed":0} [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:42.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:42.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5809" for this suite. 
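The QOS-class check above reduces to one API guarantee: when every container's limits equal its requests for both cpu and memory, the pod's status reports qosClass Guaranteed. A minimal sketch, with an illustrative pod name and image choice:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:             # limits == requests for cpu and memory => Guaranteed
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expected: Guaranteed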
• ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":19,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:42.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap that has name configmap-test-emptyKey-680f9193-bfc6-44b8-bb6b-654867ba984f [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:42.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5411" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":20,"skipped":486,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:37.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:05:38.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 19:05:40.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479938, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479938, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479938, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479938, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:05:43.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able 
to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:43.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8915" for this suite. STEP: Destroying namespace "webhook-8915-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.693 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":19,"skipped":360,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:35.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 24 19:05:35.825: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the sample API server. May 24 19:05:36.211: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 24 19:05:43.461: INFO: Waited 5.205553489s for the sample-apiserver to be ready to handle requests. 
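The webhook self-protection test above relies on a safety valve in the API server: admission webhooks are not invoked for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, so a misbehaving webhook cannot block its own removal. A sketch of the kind of dummy configuration the test creates and then deletes; the service reference and names are illustrative and the backing service need not exist for the delete to succeed:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-validating-config
webhooks:
- name: deny-configmaps.demo.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  clientConfig:
    service:
      namespace: default
      name: demo-webhook     # illustrative; no such service needs to exist
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
EOF
# Deletion must succeed even while webhooks are registered against these objects:
kubectl delete validatingwebhookconfiguration demo-validating-config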
[AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:44.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4788" for this suite. • [SLOW TEST:8.612 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":33,"skipped":589,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:42.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:05:42.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d" in namespace "downward-api-6676" to be "Succeeded or Failed" May 24 19:05:42.862: INFO: Pod "downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604646ms May 24 19:05:44.865: INFO: Pod "downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005969107s May 24 19:05:46.869: INFO: Pod "downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00966603s STEP: Saw pod success May 24 19:05:46.869: INFO: Pod "downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d" satisfied condition "Succeeded or Failed" May 24 19:05:46.872: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d container client-container: STEP: delete the pod May 24 19:05:46.887: INFO: Waiting for pod downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d to disappear May 24 19:05:46.890: INFO: Pod downwardapi-volume-0ccb9f7f-74dd-4270-9b31-e638cfe6863d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:46.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6676" for this suite. 
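The "set mode on item file" assertion above targets the per-item mode field of a downwardAPI volume. A minimal sketch under illustrative names; busybox's stat follows the volume's internal symlink, so it reports the mode of the rendered file:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400          # the per-item file mode the test asserts on
EOF
kubectl logs downward-mode-demo    # expected: "400" followed by the pod name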
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":492,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:54.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-722 May 24 19:05:02.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-722 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 24 19:05:02.983: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 24 19:05:02.983: INFO: stdout: "iptables" May 24 19:05:02.983: INFO: proxyMode: iptables May 24 19:05:02.991: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 19:05:02.994: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-722 STEP: creating replication controller affinity-clusterip-timeout in namespace services-722 I0524 19:05:03.008299 28 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-722, replica count: 3 I0524 19:05:06.058723 28 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 19:05:09.059073 28 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 19:05:09.065: INFO: Creating new exec pod May 24 19:05:12.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-722 exec execpod-affinitymvs2b -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 24 19:05:12.319: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 24 19:05:12.319: INFO: stdout: "" May 24 19:05:12.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-722 exec execpod-affinitymvs2b -- /bin/sh -x -c nc -zv -t -w 2 10.96.183.231 80' May 24 19:05:12.545: INFO: stderr: "+ nc -zv -t -w 2 10.96.183.231 80\nConnection to 10.96.183.231 80 port [tcp/http] succeeded!\n" May 24 19:05:12.545: INFO: stdout: "" May 24 19:05:12.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-722 exec execpod-affinitymvs2b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.183.231:80/ ; done' May 24 19:05:12.882: INFO: stderr: 
"+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n" May 24 19:05:12.882: INFO: stdout: "\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4\naffinity-clusterip-timeout-p2jd4" May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Received response from host: affinity-clusterip-timeout-p2jd4 May 24 19:05:12.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-722 exec execpod-affinitymvs2b -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.183.231:80/' May 24 19:05:13.134: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n" May 24 19:05:13.134: INFO: stdout: "affinity-clusterip-timeout-p2jd4" May 24 19:05:33.134: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=services-722 exec execpod-affinitymvs2b -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.183.231:80/' May 24 19:05:33.416: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.96.183.231:80/\n" May 24 19:05:33.416: INFO: stdout: "affinity-clusterip-timeout-9klln" May 24 19:05:33.416: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-722, will wait for the garbage collector to delete the pods May 24 19:05:33.490: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.477248ms May 24 19:05:33.590: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.322696ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:50.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-722" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:56.110 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":206,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:43.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:51.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8051" for this suite. 
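The Job run above ("tasks sometimes fail and are locally restarted") comes down to restartPolicy: OnFailure on the pod template: a failed container is restarted in place rather than replaced with a new pod. A sketch of one way to exercise that behaviour, assuming an emptyDir marker file to make each pod fail exactly once; all names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: flaky-job-demo
spec:
  completions: 3
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # failures restart the container in the same pod
      containers:
      - name: worker
        image: docker.io/library/busybox:1.29
        # Fail on the first attempt, succeed after the local restart; the marker
        # survives because the emptyDir volume belongs to the pod, not the container.
        command: ["sh", "-c", "if [ -e /data/ok ]; then exit 0; else touch /data/ok; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
kubectl wait --for=condition=complete job/flaky-job-demo --timeout=120s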
• [SLOW TEST:8.046 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":20,"skipped":375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:29.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-projected-t7h8 STEP: Creating a pod to test atomic-volume-subpath May 24 19:05:29.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-t7h8" in namespace "subpath-1942" to be "Succeeded or Failed" May 24 19:05:29.771: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.111406ms May 24 19:05:31.775: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007373986s May 24 19:05:33.779: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 4.01123212s May 24 19:05:35.782: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 6.014251033s May 24 19:05:37.785: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 8.017670029s May 24 19:05:39.788: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 10.02095152s May 24 19:05:41.798: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 12.030799951s May 24 19:05:43.802: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 14.034183144s May 24 19:05:45.822: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 16.054321524s May 24 19:05:47.824: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 18.057006462s May 24 19:05:49.829: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Running", Reason="", readiness=true. Elapsed: 20.061392564s May 24 19:05:51.833: INFO: Pod "pod-subpath-test-projected-t7h8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.065290236s STEP: Saw pod success May 24 19:05:51.833: INFO: Pod "pod-subpath-test-projected-t7h8" satisfied condition "Succeeded or Failed" May 24 19:05:51.840: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-projected-t7h8 container test-container-subpath-projected-t7h8: STEP: delete the pod May 24 19:05:51.857: INFO: Waiting for pod pod-subpath-test-projected-t7h8 to disappear May 24 19:05:51.860: INFO: Pod pod-subpath-test-projected-t7h8 no longer exists STEP: Deleting pod pod-subpath-test-projected-t7h8 May 24 19:05:51.860: INFO: Deleting pod "pod-subpath-test-projected-t7h8" in namespace "subpath-1942" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:51.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1942" for this suite. • [SLOW TEST:22.150 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":545,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:46.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in volume subpath May 24 19:05:46.945: INFO: Waiting up to 5m0s for pod "var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a" in namespace "var-expansion-8359" to be "Succeeded or Failed" May 24 19:05:46.948: INFO: Pod "var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431603ms May 24 19:05:48.952: INFO: Pod "var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a": Phase="Running", Reason="", readiness=true. Elapsed: 2.00697617s May 24 19:05:50.957: INFO: Pod "var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a": Phase="Running", Reason="", readiness=true. Elapsed: 4.011308781s May 24 19:05:52.960: INFO: Pod "var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014505373s STEP: Saw pod success May 24 19:05:52.960: INFO: Pod "var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a" satisfied condition "Succeeded or Failed" May 24 19:05:52.963: INFO: Trying to get logs from node leguer-worker pod var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a container dapi-container: STEP: delete the pod May 24 19:05:52.976: INFO: Waiting for pod var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a to disappear May 24 19:05:52.978: INFO: Pod var-expansion-56476b29-cdbb-4ec9-890a-ec51c605198a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:52.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8359" for this suite. • [SLOW TEST:6.081 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":22,"skipped":495,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:50.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-a9c87690-f946-4d78-bd13-ae7d231b0f03 STEP: Creating a pod to test consume configMaps May 24 19:05:50.456: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8" in namespace "projected-9884" to be "Succeeded or Failed" May 24 19:05:50.458: INFO: Pod "pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473954ms May 24 19:05:52.462: INFO: Pod "pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005762475s May 24 19:05:54.465: INFO: Pod "pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009522405s May 24 19:05:56.469: INFO: Pod "pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013295934s STEP: Saw pod success May 24 19:05:56.469: INFO: Pod "pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8" satisfied condition "Succeeded or Failed" May 24 19:05:56.473: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8 container agnhost-container: STEP: delete the pod May 24 19:05:56.489: INFO: Waiting for pod pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8 to disappear May 24 19:05:56.492: INFO: Pod pod-projected-configmaps-ad8e803f-b836-40c7-8330-93ce2aa755d8 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:05:56.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9884" for this suite. • [SLOW TEST:6.085 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":207,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:56.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on tmpfs May 24 19:05:56.568: INFO: Waiting up to 5m0s for pod "pod-3e6c5ec7-52e8-403c-93be-4d830c869c15" in namespace "emptydir-6279" to be "Succeeded or Failed" May 24 19:05:56.571: INFO: Pod "pod-3e6c5ec7-52e8-403c-93be-4d830c869c15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.924757ms May 24 19:05:58.575: INFO: Pod "pod-3e6c5ec7-52e8-403c-93be-4d830c869c15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006600213s May 24 19:06:00.579: INFO: Pod "pod-3e6c5ec7-52e8-403c-93be-4d830c869c15": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010263096s STEP: Saw pod success May 24 19:06:00.579: INFO: Pod "pod-3e6c5ec7-52e8-403c-93be-4d830c869c15" satisfied condition "Succeeded or Failed" May 24 19:06:00.582: INFO: Trying to get logs from node leguer-worker pod pod-3e6c5ec7-52e8-403c-93be-4d830c869c15 container test-container: STEP: delete the pod May 24 19:06:00.596: INFO: Waiting for pod pod-3e6c5ec7-52e8-403c-93be-4d830c869c15 to disappear May 24 19:06:00.599: INFO: Pod pod-3e6c5ec7-52e8-403c-93be-4d830c869c15 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:00.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6279" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":220,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:04:47.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0524 19:04:58.662100 24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 24 19:06:00.681: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. May 24 19:06:00.681: INFO: Deleting pod "simpletest-rc-to-be-deleted-7jwct" in namespace "gc-2293" May 24 19:06:00.689: INFO: Deleting pod "simpletest-rc-to-be-deleted-9htsb" in namespace "gc-2293" May 24 19:06:00.696: INFO: Deleting pod "simpletest-rc-to-be-deleted-9l788" in namespace "gc-2293" May 24 19:06:00.703: INFO: Deleting pod "simpletest-rc-to-be-deleted-jmh4d" in namespace "gc-2293" May 24 19:06:00.710: INFO: Deleting pod "simpletest-rc-to-be-deleted-k2lhx" in namespace "gc-2293" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:00.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2293" for this suite. 
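The emptydir-6279 run earlier in this stretch is the "on tmpfs" variant, which hinges on medium: Memory backing the volume with RAM instead of node disk. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory      # tmpfs-backed emptyDir, as in the 'on tmpfs' variants
EOF
kubectl logs emptydir-tmpfs-demo   # the mount line should report filesystem type tmpfs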
• [SLOW TEST:72.785 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":28,"skipped":506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":23,"skipped":507,"failed":0} [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:01.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:01.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1111" for this suite. 
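The container-probe-1111 run above confirms a useful asymmetry: a failing readiness probe keeps the pod out of Ready (and out of Service endpoints) but never restarts the container; only liveness probes trigger restarts. A sketch of such a pod, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails => pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# READY stays 0/1 while RESTARTS stays 0, matching the test's expectation:
kubectl get pod never-ready-demo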
• [SLOW TEST:60.064 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":507,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:00.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 24 19:06:01.567: INFO: starting watch STEP: patching STEP: updating May 24 19:06:01.575: INFO: waiting for watch events with expected annotations May 24 19:06:01.575: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:01.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-2095" for this suite. 
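The CSR test above walks the whole certificates.k8s.io/v1 surface: create/get/list/watch, patch and update on the object, the /approval and /status subresources, then delete. A sketch of the manual create-and-approve path, assuming openssl and GNU base64 are available locally; every name here is illustrative, and status.certificate is only populated once the configured signer issues the certificate:

openssl req -new -newkey rsa:2048 -nodes -keyout demo.key -out demo.csr -subj "/CN=demo-user"
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 -w0 < demo.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve demo-csr    # drives the /approval subresource the test patches
kubectl get csr demo-csr -o jsonpath='{.status.certificate}' | base64 -d > demo.crt
kubectl delete csr demo-csr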
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":29,"skipped":538,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:05:52.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:05:53.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74" in namespace "downward-api-4517" to be "Succeeded or Failed" May 24 19:05:53.032: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.757798ms May 24 19:05:55.036: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006232726s May 24 19:05:57.039: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009650429s May 24 19:05:59.043: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013756023s May 24 19:06:01.050: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020719726s May 24 19:06:03.055: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.025263279s STEP: Saw pod success May 24 19:06:03.055: INFO: Pod "downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74" satisfied condition "Succeeded or Failed" May 24 19:06:03.058: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74 container client-container: STEP: delete the pod May 24 19:06:03.080: INFO: Waiting for pod downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74 to disappear May 24 19:06:03.083: INFO: Pod downwardapi-volume-8422d6c3-a029-44a9-8020-96668ff6ca74 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:03.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4517" for this suite. 
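The cpu-request test above uses a resourceFieldRef item rather than a fieldRef, and the divisor controls the unit the value is rendered in. A minimal sketch with illustrative names; with divisor 1m, a 250m request is written out as 250:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m          # render the value in millicores
EOF
kubectl logs downward-cpu-demo   # expected: 250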
• [SLOW TEST:10.097 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:01.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 19:06:05.817: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:05.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-436" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:00.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 19:06:00.664: INFO: Waiting up to 5m0s for pod "pod-f3175723-30df-4ad0-8ce0-3fe331057f71" in namespace "emptydir-865" to be "Succeeded or Failed" May 24 19:06:00.667: INFO: Pod "pod-f3175723-30df-4ad0-8ce0-3fe331057f71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.080603ms May 24 19:06:02.671: INFO: Pod "pod-f3175723-30df-4ad0-8ce0-3fe331057f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007092711s May 24 19:06:04.675: INFO: Pod "pod-f3175723-30df-4ad0-8ce0-3fe331057f71": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011033111s May 24 19:06:06.678: INFO: Pod "pod-f3175723-30df-4ad0-8ce0-3fe331057f71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014101437s STEP: Saw pod success May 24 19:06:06.678: INFO: Pod "pod-f3175723-30df-4ad0-8ce0-3fe331057f71" satisfied condition "Succeeded or Failed" May 24 19:06:06.680: INFO: Trying to get logs from node leguer-worker pod pod-f3175723-30df-4ad0-8ce0-3fe331057f71 container test-container: STEP: delete the pod May 24 19:06:06.699: INFO: Waiting for pod pod-f3175723-30df-4ad0-8ce0-3fe331057f71 to disappear May 24 19:06:06.701: INFO: Pod pod-f3175723-30df-4ad0-8ce0-3fe331057f71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:06.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-865" for this suite. • [SLOW TEST:6.085 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:01.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's command May 24 19:06:01.430: INFO: Waiting up to 5m0s for pod "var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284" in namespace "var-expansion-1952" to be "Succeeded or Failed" May 24 19:06:01.433: INFO: Pod "var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284": Phase="Pending", Reason="", readiness=false. Elapsed: 3.084795ms May 24 19:06:03.437: INFO: Pod "var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006947103s May 24 19:06:05.440: INFO: Pod "var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010442784s May 24 19:06:07.445: INFO: Pod "var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01520419s STEP: Saw pod success May 24 19:06:07.445: INFO: Pod "var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284" satisfied condition "Succeeded or Failed" May 24 19:06:07.448: INFO: Trying to get logs from node leguer-worker pod var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284 container dapi-container: STEP: delete the pod May 24 19:06:07.467: INFO: Waiting for pod var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284 to disappear May 24 19:06:07.470: INFO: Pod var-expansion-f1dac3aa-4826-4a15-b07d-b51e93a96284 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:07.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1952" for this suite. • [SLOW TEST:6.089 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:03.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:06:03.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 19:06:05.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479963, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479963, loc:(*time.Location)(0x7975ee0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479964, loc:(*time.Location)(0x7975ee0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757479963, loc:(*time.Location)(0x7975ee0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:06:09.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a validating webhook configuration STEP: Creating a 
configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:09.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9961" for this suite. STEP: Destroying namespace "webhook-9961-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.938 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":24,"skipped":539,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:09.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 19:06:09.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 19:06:12.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:13.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7559" for 
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:09.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 24 19:06:09.782: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 24 19:06:12.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:13.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7559" for this suite.
STEP: Destroying namespace "webhook-7559-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":25,"skipped":555,"failed":0}
SSSSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:05:51.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 19:05:51.779: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Pending, waiting for it to be Running (with Ready = true)
May 24 19:05:53.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Pending, waiting for it to be Running (with Ready = true)
May 24 19:05:55.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Pending, waiting for it to be Running (with Ready = true)
May 24 19:05:57.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Pending, waiting for it to be Running (with Ready = true)
May 24 19:05:59.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Pending, waiting for it to be Running (with Ready = true)
May 24 19:06:01.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:03.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:05.782: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:07.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:09.782: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:11.783: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:13.784: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:15.784: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = false)
May 24 19:06:17.785: INFO: The status of Pod test-webserver-5cbec17b-fcb0-4d69-a36f-adb1d274b923 is Running (Ready = true)
May 24 19:06:17.788: INFO: Container started at 2021-05-24 19:05:53 +0000 UTC, pod became ready at 2021-05-24 19:06:16 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:17.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6555" for this suite.
• [SLOW TEST:26.064 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
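Editor's note: the probe test above watches a pod stay Running (Ready = false) until the readiness probe's initial delay elapses. A minimal sketch of such a pod, using the client-go v0.20 API the suite is built against (where the probe field is still named Handler), with illustrative image and timings:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// readinessPod is a web server whose readiness probe has a deliberate
// initial delay, so the pod reports Ready = false until the delay passes.
// Image and timing values are illustrative, not the e2e test's exact ones.
func readinessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 20, // Ready stays false for at least this long
					PeriodSeconds:       5,
				},
			}},
		},
	}
}
```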
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:17.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:17.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5716" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":22,"skipped":434,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:17.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
May 24 19:06:17.981: INFO: Waiting up to 5m0s for pod "downward-api-e5d85b45-9ca8-4930-b213-23527af2e998" in namespace "downward-api-7394" to be "Succeeded or Failed"
May 24 19:06:17.984: INFO: Pod "downward-api-e5d85b45-9ca8-4930-b213-23527af2e998": Phase="Pending", Reason="", readiness=false. Elapsed: 3.019385ms
May 24 19:06:19.988: INFO: Pod "downward-api-e5d85b45-9ca8-4930-b213-23527af2e998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006784176s
STEP: Saw pod success
May 24 19:06:19.988: INFO: Pod "downward-api-e5d85b45-9ca8-4930-b213-23527af2e998" satisfied condition "Succeeded or Failed"
May 24 19:06:19.991: INFO: Trying to get logs from node leguer-worker2 pod downward-api-e5d85b45-9ca8-4930-b213-23527af2e998 container dapi-container:
STEP: delete the pod
May 24 19:06:20.007: INFO: Waiting for pod downward-api-e5d85b45-9ca8-4930-b213-23527af2e998 to disappear
May 24 19:06:20.010: INFO: Pod downward-api-e5d85b45-9ca8-4930-b213-23527af2e998 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:20.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7394" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
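Editor's note: the Downward API test above exposes the container's own resource limits and requests as environment variables via resourceFieldRef. A minimal sketch of that pod shape, with illustrative names and quantities:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod prints env vars derived from the container's own
// limits/requests, the mechanism the test asserts on. Names are illustrative.
func downwardAPIEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.33",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				Env: []corev1.EnvVar{
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
					{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"},
					}},
				},
			}},
		},
	}
}
```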
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:13.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 19:06:18.021: INFO: Waiting up to 5m0s for pod "client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847" in namespace "pods-2573" to be "Succeeded or Failed"
May 24 19:06:18.024: INFO: Pod "client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847": Phase="Pending", Reason="", readiness=false. Elapsed: 3.377857ms
May 24 19:06:20.028: INFO: Pod "client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007510723s
STEP: Saw pod success
May 24 19:06:20.029: INFO: Pod "client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847" satisfied condition "Succeeded or Failed"
May 24 19:06:20.032: INFO: Trying to get logs from node leguer-worker2 pod client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847 container env3cont:
STEP: delete the pod
May 24 19:06:20.049: INFO: Waiting for pod client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847 to disappear
May 24 19:06:20.052: INFO: Pod client-envvars-d0a90ce8-926f-427e-92f8-71d52f846847 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:20.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2573" for this suite.
• [SLOW TEST:6.102 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":560,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
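Editor's note: the env-vars-for-services test relies on kubelet injecting NAME_SERVICE_HOST / NAME_SERVICE_PORT variables, which only happens for pods created after the service exists. A hedged sketch of that ordering with made-up names and ports:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createServiceThenPod creates a service first, then a pod that dumps its
// environment; the pod should see FOOSERVICE_SERVICE_HOST and
// FOOSERVICE_SERVICE_PORT. All names here are illustrative.
func createServiceThenPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "client-envvars"},
			Ports:    []corev1.ServicePort{{Port: 8765, TargetPort: intstr.FromInt(8080)}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-envvars", Labels: map[string]string{"name": "client-envvars"}},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont",
				Image:   "busybox:1.33",
				Command: []string{"sh", "-c", "env"},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```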
[BeforeEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:20.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:20.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4894" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":27,"skipped":571,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
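Editor's note: the Events lifecycle above is plain core/v1 CRUD. A minimal sketch of the patch/fetch/delete/list sequence, assuming a configured clientset and a hypothetical event name "test-event":

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// eventLifecycle mirrors the test's sequence against a hypothetical event.
func eventLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// List events across all namespaces, as the test does twice.
	if _, err := cs.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	// Patch the event's message with a strategic merge patch.
	patch := []byte(`{"message":"patched message"}`)
	if _, err := cs.CoreV1().Events(ns).Patch(ctx, "test-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// Fetch it back, then delete it.
	if _, err := cs.CoreV1().Events(ns).Get(ctx, "test-event", metav1.GetOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().Events(ns).Delete(ctx, "test-event", metav1.DeleteOptions{})
}
```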
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:05:19.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 19:05:19.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 24 19:05:20.278: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-24T19:05:20Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-24T19:05:20Z]] name:name1 resourceVersion:828945 uid:5327aafb-4b0c-498a-a0ca-671b25095816] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 24 19:05:30.286: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-24T19:05:30Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-24T19:05:30Z]] name:name2 resourceVersion:829176 uid:d3ab3677-0ff7-4e23-a11c-1ba44c5227a3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 24 19:05:40.294: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-24T19:05:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-24T19:05:40Z]] name:name1 resourceVersion:829390 uid:5327aafb-4b0c-498a-a0ca-671b25095816] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 24 19:05:50.302: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-24T19:05:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-24T19:05:50Z]] name:name2 resourceVersion:829744 uid:d3ab3677-0ff7-4e23-a11c-1ba44c5227a3] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 24 19:06:00.312: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-24T19:05:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-24T19:05:40Z]] name:name1 resourceVersion:830132 uid:5327aafb-4b0c-498a-a0ca-671b25095816] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 24 19:06:10.321: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-05-24T19:05:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-05-24T19:05:50Z]] name:name2 resourceVersion:830560 uid:d3ab3677-0ff7-4e23-a11c-1ba44c5227a3] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:20.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-310" for this suite.
• [SLOW TEST:61.173 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":41,"skipped":703,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
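Editor's note: the ADDED/MODIFIED/DELETED events above come from a watch on custom resources, which client-go serves through the dynamic client. A minimal sketch; the group/version match the log, but the resource plural is a guess since the log only shows the kind:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// watchCustomResources prints watch events for the CRD used above.
// "noxus" is an assumed plural; an already-configured dynamic client is
// passed in.
func watchCustomResources(ctx context.Context, dyn dynamic.Interface, ns string) error {
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Namespace(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}
```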
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:20.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
May 24 19:06:20.112: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fc4fe207-5f30-45bd-9c59-1d18e081cc91" in namespace "security-context-test-9632" to be "Succeeded or Failed"
May 24 19:06:20.114: INFO: Pod "busybox-user-65534-fc4fe207-5f30-45bd-9c59-1d18e081cc91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263836ms
May 24 19:06:22.118: INFO: Pod "busybox-user-65534-fc4fe207-5f30-45bd-9c59-1d18e081cc91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006219284s
May 24 19:06:22.118: INFO: Pod "busybox-user-65534-fc4fe207-5f30-45bd-9c59-1d18e081cc91" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:22.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9632" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":463,"failed":0}
SS
------------------------------
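Editor's note: a minimal sketch of the kind of pod this runAsUser test creates; the container runs as UID 65534 ("nobody") and the test checks the container's view of its own UID. Image and command are illustrative.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootPod runs its container as UID 65534 via the container-level
// SecurityContext; `id -u` should print 65534.
func nonRootPod() *corev1.Pod {
	uid := int64(65534)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox:1.33",
				Command:         []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
}
```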
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:20.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:22.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1426" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":586,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:05.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:22.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7573" for this suite.
• [SLOW TEST:16.432 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":31,"skipped":684,"failed":0}
SSS
------------------------------
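Editor's note: the scoped-quota behaviour above (a long-running pod is ignored by a Terminating-scoped quota, and a pod with activeDeadlineSeconds is counted) comes from the quota's Scopes field. A hedged sketch with illustrative names and quantities; the NotTerminating counterpart just swaps in corev1.ResourceQuotaScopeNotTerminating:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingQuota only counts pods that have activeDeadlineSeconds set,
// i.e. "terminating" pods, which is why the long-running pod was ignored.
func terminatingQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-terminating"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:           resource.MustParse("1"),
				corev1.ResourceRequestsCPU:    resource.MustParse("500m"),
				corev1.ResourceRequestsMemory: resource.MustParse("500Mi"),
			},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeTerminating},
		},
	}
}
```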
[Conformance]","total":-1,"completed":31,"skipped":684,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:06:22.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin May 24 19:06:22.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2" in namespace "projected-891" to be "Succeeded or Failed" May 24 19:06:22.180: INFO: Pod "downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.709365ms May 24 19:06:24.189: INFO: Pod "downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011461091s STEP: Saw pod success May 24 19:06:24.189: INFO: Pod "downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2" satisfied condition "Succeeded or Failed" May 24 19:06:24.193: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2 container client-container: STEP: delete the pod May 24 19:06:24.206: INFO: Waiting for pod downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2 to disappear May 24 19:06:24.209: INFO: Pod downwardapi-volume-f0c1f693-cd77-4596-8421-cbc89139d0e2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:06:24.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-891" for this suite. 
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:24.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:24.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9450" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":26,"skipped":470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
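Editor's note: the Endpoint lifecycle above drives a static core/v1 Endpoints object. A minimal sketch of the create and patch steps, with a made-up name, address, and port:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// endpointLifecycle creates a static Endpoints object and then patches a
// label onto it, two of the steps the test performs.
func endpointLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-example-endpoint"},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.10"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80}},
		}},
	}
	if _, err := cs.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"metadata":{"labels":{"test":"patched"}}}`)
	_, err := cs.CoreV1().Endpoints(ns).Patch(ctx, ep.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```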
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:24.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: starting the proxy server
May 24 19:06:24.392: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2809 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:24.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2809" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":27,"skipped":493,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:24.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: validating cluster-info
May 24 19:06:24.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-5287 cluster-info'
May 24 19:06:24.726: INFO: stderr: ""
May 24 19:06:24.726: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.13.89:44097\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:24.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5287" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":28,"skipped":521,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:20.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-cae756d4-5cce-4c46-acbc-c3e7c6f50e4c
STEP: Creating a pod to test consume secrets
May 24 19:06:20.959: INFO: Waiting up to 5m0s for pod "pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01" in namespace "secrets-5977" to be "Succeeded or Failed"
May 24 19:06:20.962: INFO: Pod "pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.742633ms
May 24 19:06:22.965: INFO: Pod "pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01": Phase="Running", Reason="", readiness=true. Elapsed: 2.00527053s
May 24 19:06:24.968: INFO: Pod "pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008875686s
STEP: Saw pod success
May 24 19:06:24.968: INFO: Pod "pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01" satisfied condition "Succeeded or Failed"
May 24 19:06:24.972: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01 container secret-volume-test:
STEP: delete the pod
May 24 19:06:24.986: INFO: Waiting for pod pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01 to disappear
May 24 19:06:24.989: INFO: Pod pod-secrets-2ae62d18-0ec4-4181-9dd7-eb5723353d01 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:24.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5977" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":748,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
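Editor's note: "consumable in multiple volumes" means the same secret is mounted at two paths in one pod. A minimal sketch of that shape, with illustrative names and mount paths:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// twoSecretVolumes mounts one secret through two separate volumes.
func twoSecretVolumes(secretName string) *corev1.Pod {
	vol := func(n string) corev1.Volume {
		return corev1.Volume{
			Name:         n,
			VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: secretName}},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.33",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}
```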
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:22.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name cm-test-opt-del-799251b1-37c9-4ff5-9f49-64c4682dbee5
STEP: Creating configMap with name cm-test-opt-upd-95925428-097a-42f1-ba08-cb313c428f4c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-799251b1-37c9-4ff5-9f49-64c4682dbee5
STEP: Updating configmap cm-test-opt-upd-95925428-097a-42f1-ba08-cb313c428f4c
STEP: Creating configMap with name cm-test-opt-create-c0fae723-0eb9-475d-8390-cb80c34eecd7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:26.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2377" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":588,"failed":0}
SSSS
------------------------------
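Editor's note: the "opt-create" step above works because the volume references a configMap marked optional: the pod starts even though the configMap does not exist yet, and the kubelet projects the keys once it appears. A hedged sketch of such a volume, name illustrative:

```go
package main

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume tolerates a missing configMap; the mount is
// populated later, once the referenced configMap is created.
func optionalConfigMapVolume(name string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional,
			},
		},
	}
}
```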
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:25.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 24 19:06:25.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b" in namespace "downward-api-513" to be "Succeeded or Failed"
May 24 19:06:25.138: INFO: Pod "downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.802897ms
May 24 19:06:27.143: INFO: Pod "downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007593567s
STEP: Saw pod success
May 24 19:06:27.143: INFO: Pod "downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b" satisfied condition "Succeeded or Failed"
May 24 19:06:27.147: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b container client-container:
STEP: delete the pod
May 24 19:06:27.161: INFO: Waiting for pod downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b to disappear
May 24 19:06:27.163: INFO: Pod downwardapi-volume-ee8dae6d-4ec0-42d6-98c0-757781bd7c7b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:27.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-513" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":805,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:27.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create set of pod templates
May 24 19:06:27.210: INFO: created test-podtemplate-1
May 24 19:06:27.214: INFO: created test-podtemplate-2
May 24 19:06:27.217: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
May 24 19:06:27.220: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
May 24 19:06:27.235: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:27.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3570" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":44,"skipped":808,"failed":0}
SSSSS
------------------------------
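Editor's note: the "requesting DeleteCollection" step removes all labelled templates in one API call. A minimal sketch; the label selector is a guess at what the test sets on its three templates:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodTemplates removes every pod template matching the label in one
// DeleteCollection call. "podtemplate-set=true" is an assumed label.
func deletePodTemplates(ctx context.Context, cs kubernetes.Interface, ns string) error {
	return cs.CoreV1().PodTemplates(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "podtemplate-set=true"})
}
```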
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:22.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
May 24 19:06:22.405: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:27.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5470" for this suite.
•S
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":32,"skipped":687,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
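Editor's note: "initContainers in spec.initContainers" refers to a pod whose init containers must all exit successfully before the main container starts. A minimal sketch with illustrative images; on a RestartAlways pod a failing init container is retried rather than failing the pod outright:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers in order before the main
// container; names and images are illustrative.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.33", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.33", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
}
```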
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:24.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
May 24 19:06:24.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b" in namespace "projected-795" to be "Succeeded or Failed"
May 24 19:06:24.835: INFO: Pod "downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.406437ms
May 24 19:06:26.839: INFO: Pod "downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b": Phase="Running", Reason="", readiness=true. Elapsed: 2.007554358s
May 24 19:06:28.843: INFO: Pod "downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011901301s
STEP: Saw pod success
May 24 19:06:28.844: INFO: Pod "downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b" satisfied condition "Succeeded or Failed"
May 24 19:06:28.846: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b container client-container:
STEP: delete the pod
May 24 19:06:28.860: INFO: Waiting for pod downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b to disappear
May 24 19:06:28.862: INFO: Pod downwardapi-volume-83e87d65-36d0-457b-9820-743eaffd8b3b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:28.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-795" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":551,"failed":0}
SSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:27.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test env composition
May 24 19:06:27.291: INFO: Waiting up to 5m0s for pod "var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5" in namespace "var-expansion-9387" to be "Succeeded or Failed"
May 24 19:06:27.294: INFO: Pod "var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.636525ms
May 24 19:06:29.298: INFO: Pod "var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006557029s
STEP: Saw pod success
May 24 19:06:29.298: INFO: Pod "var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5" satisfied condition "Succeeded or Failed"
May 24 19:06:29.301: INFO: Trying to get logs from node leguer-worker pod var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5 container dapi-container:
STEP: delete the pod
May 24 19:06:29.315: INFO: Waiting for pod var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5 to disappear
May 24 19:06:29.319: INFO: Pod var-expansion-8bee7866-097c-42c5-a31a-a17ed3ced7c5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:29.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9387" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":814,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
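Editor's note: "composing env vars" refers to Kubernetes' $(VAR) expansion in env values. A tiny sketch of the relevant fragment, values illustrative:

```go
package main

import corev1 "k8s.io/api/core/v1"

// composedEnv defines FOO first and composes BAR from it; the kubelet
// expands $(FOO) before starting the container.
func composedEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "$(FOO);;$(FOO)"}, // becomes "foo-value;;foo-value"
	}
}
```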
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:26.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-c8dae876-cd92-4c88-a91e-99234d185334
STEP: Creating a pod to test consume configMaps
May 24 19:06:26.457: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61" in namespace "configmap-7628" to be "Succeeded or Failed"
May 24 19:06:26.460: INFO: Pod "pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.120167ms
May 24 19:06:28.464: INFO: Pod "pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006992292s
May 24 19:06:30.468: INFO: Pod "pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010807109s
STEP: Saw pod success
May 24 19:06:30.468: INFO: Pod "pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61" satisfied condition "Succeeded or Failed"
May 24 19:06:30.471: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61 container agnhost-container:
STEP: delete the pod
May 24 19:06:30.493: INFO: Waiting for pod pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61 to disappear
May 24 19:06:30.495: INFO: Pod pod-configmaps-7c3de1f9-fd76-49ce-86d1-ffc5e8f91a61 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:30.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7628" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":592,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:04:19.455: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 19:04:29.487: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 24 19:04:39.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9799 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:04:39.738: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:04:39.738: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:04:39.738: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:04:50.035: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update May 24 19:04:50.035: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 19:04:50.035: INFO: Waiting for Pod statefulset-9799/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 19:05:00.044: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update May 24 19:05:00.044: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 24 19:05:10.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9799 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 19:05:10.242: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 24 19:05:10.242: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 19:05:10.242: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 19:05:20.279: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 24 19:05:30.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9799 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 19:05:30.524: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 24 19:05:30.524: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 19:05:30.524: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 19:05:40.544: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update May 24 19:05:40.545: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 24 19:05:40.545: INFO: Waiting for Pod statefulset-9799/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 24 19:05:50.552: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update May 24 19:05:50.552: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-65c7964b94 update revision 
STEP: Rolling back to a previous revision
May 24 19:05:10.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9799 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 24 19:05:10.242: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
May 24 19:05:10.242: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 24 19:05:10.242: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 24 19:05:20.279: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May 24 19:05:30.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=statefulset-9799 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 24 19:05:30.524: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
May 24 19:05:30.524: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 24 19:05:30.524: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 24 19:05:40.544: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update
May 24 19:05:40.545: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 24 19:05:40.545: INFO: Waiting for Pod statefulset-9799/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 24 19:05:50.552: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update
May 24 19:05:50.552: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May 24 19:06:00.552: INFO: Waiting for StatefulSet statefulset-9799/ss2 to complete update
May 24 19:06:00.552: INFO: Waiting for Pod statefulset-9799/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 24 19:06:10.552: INFO: Deleting all statefulset in ns statefulset-9799
May 24 19:06:10.555: INFO: Scaling statefulset ss2 to 0
May 24 19:06:30.572: INFO: Waiting for statefulset status.replicas updated to 0
May 24 19:06:30.575: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:30.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9799" for this suite.
• [SLOW TEST:151.496 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":20,"skipped":413,"failed":0}
SS
------------------------------
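Editor's note: both the rolling update (httpd 2.4.38-alpine to 2.4.39-alpine) and the rollback above are driven by the same operation, a change to the pod template; with the default RollingUpdate strategy the controller then replaces pods in reverse ordinal order, as the log shows. A hedged sketch of that template change, assuming a configured clientset:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setStatefulSetImage updates the first container's image in the pod
// template; calling it again with the previous tag performs the rollback.
func setStatefulSetImage(ctx context.Context, cs kubernetes.Interface, ns, name, image string) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ss.Spec.Template.Spec.Containers[0].Image = image
	_, err = cs.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{})
	return err
}
```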
May 24 19:06:30.600: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:30.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
May 24 19:06:30.590: INFO: Waiting up to 5m0s for pod "pod-d95a8305-6165-4188-9ada-048959990961" in namespace "emptydir-818" to be "Succeeded or Failed"
May 24 19:06:30.593: INFO: Pod "pod-d95a8305-6165-4188-9ada-048959990961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.748807ms
May 24 19:06:32.597: INFO: Pod "pod-d95a8305-6165-4188-9ada-048959990961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006646295s
STEP: Saw pod success
May 24 19:06:32.597: INFO: Pod "pod-d95a8305-6165-4188-9ada-048959990961" satisfied condition "Succeeded or Failed"
May 24 19:06:32.600: INFO: Trying to get logs from node leguer-worker2 pod pod-d95a8305-6165-4188-9ada-048959990961 container test-container:
STEP: delete the pod
May 24 19:06:32.615: INFO: Waiting for pod pod-d95a8305-6165-4188-9ada-048959990961 to disappear
May 24 19:06:32.618: INFO: Pod pod-d95a8305-6165-4188-9ada-048959990961 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:32.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-818" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":617,"failed":0}
May 24 19:06:32.630: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:27.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating Agnhost RC
May 24 19:06:27.316: INFO: namespace kubectl-2928
May 24 19:06:27.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2928 create -f -'
May 24 19:06:27.685: INFO: stderr: ""
May 24 19:06:27.685: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
May 24 19:06:28.689: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 19:06:28.689: INFO: Found 0 / 1
May 24 19:06:29.688: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 19:06:29.688: INFO: Found 0 / 1
May 24 19:06:30.689: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 19:06:30.689: INFO: Found 1 / 1
May 24 19:06:30.689: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 24 19:06:30.692: INFO: Selector matched 1 pods for map[app:agnhost]
May 24 19:06:30.692: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 24 19:06:30.692: INFO: wait on agnhost-primary startup in kubectl-2928
May 24 19:06:30.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2928 logs agnhost-primary-hcvm6 agnhost-primary'
May 24 19:06:30.826: INFO: stderr: ""
May 24 19:06:30.826: INFO: stdout: "Paused\n"
STEP: exposing RC
May 24 19:06:30.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2928 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
May 24 19:06:30.976: INFO: stderr: ""
May 24 19:06:30.976: INFO: stdout: "service/rm2 exposed\n"
May 24 19:06:30.980: INFO: Service rm2 in namespace kubectl-2928 found.
STEP: exposing service
May 24 19:06:32.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-2928 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
May 24 19:06:33.127: INFO: stderr: ""
May 24 19:06:33.127: INFO: stdout: "service/rm3 exposed\n"
May 24 19:06:33.131: INFO: Service rm3 in namespace kubectl-2928 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:35.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2928" for this suite.
• [SLOW TEST:7.855 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229
    should create services for rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":33,"skipped":706,"failed":0}
May 24 19:06:35.151: INFO: Running AfterSuite actions on all nodes
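------------------------------
The expose steps above chain two Services off the same selector: rm2 is created from the replication controller, then rm3 from rm2. Replayed directly from the logged invocations, dropping only the --server/--kubeconfig plumbing; the final endpoints check is an added verification step, not in the log:

    # Service rm2 adopts the RC's selector, mapping port 1234 -> container port 6379
    kubectl -n kubectl-2928 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
    # Service rm3 copies rm2's selector, mapping port 2345 -> the same target port
    kubectl -n kubectl-2928 expose service rm2 --name=rm3 --port=2345 --target-port=6379
    # Both services should list the same pod IP behind different ports
    kubectl -n kubectl-2928 get endpoints rm2 rm3
------------------------------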
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":521,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:07.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
May 24 19:06:11.532: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2641 PodName:var-expansion-ef0bee95-4988-4e44-b1e6-6e575eabece2 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 24 19:06:11.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: test for file in mounted path
May 24 19:06:11.667: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2641 PodName:var-expansion-ef0bee95-4988-4e44-b1e6-6e575eabece2 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 24 19:06:11.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: updating the annotation value
May 24 19:06:12.288: INFO: Successfully updated pod "var-expansion-ef0bee95-4988-4e44-b1e6-6e575eabece2"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
May 24 19:06:12.292: INFO: Deleting pod "var-expansion-ef0bee95-4988-4e44-b1e6-6e575eabece2" in namespace "var-expansion-2641"
May 24 19:06:12.297: INFO: Wait up to 5m0s for pod "var-expansion-ef0bee95-4988-4e44-b1e6-6e575eabece2" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:06:48.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2641" for this suite.
• [SLOW TEST:40.832 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":26,"skipped":521,"failed":0}
May 24 19:06:48.317: INFO: Running AfterSuite actions on all nodes
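------------------------------
The var-expansion spec above writes a file through one mount of a volume and reads the same file back through a second, subpath-scoped mount of that volume (the two ExecWithOptions commands in the log). A minimal sketch of that wiring, assuming an emptyDir volume and an env-expanded subPathExpr; the original pod spec is not shown in the log, so names and values here are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        # Create the file via the full mount, then assert it via the subpath mount
        command: ["sh", "-c", "touch /volume_mount/mypath/foo/test.log && test -f /subpath_mount/test.log"]
        env:
        - name: SUBPATH_DIR
          value: mypath/foo
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
        - name: workdir
          mountPath: /subpath_mount
          # Expanded by the kubelet from the container's env, not by the shell
          subPathExpr: $(SUBPATH_DIR)
      volumes:
      - name: workdir
        emptyDir: {}
    EOF
------------------------------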
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:04:24.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod with failed condition
STEP: updating the pod
May 24 19:06:24.940: INFO: Successfully updated pod "var-expansion-14515010-399c-4ae5-b297-a4b5eda5b31c"
STEP: waiting for pod running
STEP: deleting the pod gracefully
May 24 19:06:26.947: INFO: Deleting pod "var-expansion-14515010-399c-4ae5-b297-a4b5eda5b31c" in namespace "var-expansion-638"
May 24 19:06:26.953: INFO: Wait up to 5m0s for pod "var-expansion-14515010-399c-4ae5-b297-a4b5eda5b31c" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:07:09.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-638" for this suite.
• [SLOW TEST:164.661 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":20,"skipped":284,"failed":0}
May 24 19:07:09.041: INFO: Running AfterSuite actions on all nodes
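------------------------------
The preceding spec deliberately holds the pod in a failing state for about two minutes (created 19:04:24, updated 19:06:24): a subpath expression that expands to a path the kubelet cannot mount leaves the container in a waiting state (typically CreateContainerConfigError) until the referenced value is corrected, after which the kubelet's retry succeeds and the container starts. The inspection side of that loop, using the names from the log; the annotation key is hypothetical, since the spec's exact field is not shown here:

    # Events on the pod show the failed subpath mount while it is wedged
    kubectl -n var-expansion-638 describe pod var-expansion-14515010-399c-4ae5-b297-a4b5eda5b31c
    # Updating the value the expression expands from lets the kubelet's next sync succeed
    kubectl -n var-expansion-638 annotate pod var-expansion-14515010-399c-4ae5-b297-a4b5eda5b31c \
      mysubpath=foo --overwrite
------------------------------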
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:06.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name s-test-opt-del-432754c7-ff0e-4003-9d79-d00147f1d484
STEP: Creating secret with name s-test-opt-upd-35e68e66-507e-442c-9fd1-612ae98b1d29
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-432754c7-ff0e-4003-9d79-d00147f1d484
STEP: Updating secret s-test-opt-upd-35e68e66-507e-442c-9fd1-612ae98b1d29
STEP: Creating secret with name s-test-opt-create-4737bfe8-b0b5-4d64-9a4a-d141fc62724c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:07:33.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-501" for this suite.
• [SLOW TEST:86.744 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":299,"failed":0}
May 24 19:07:33.579: INFO: Running AfterSuite actions on all nodes
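------------------------------
The secrets spec above leans on two volume behaviors: keys of an updated secret are re-projected into already-mounted volumes by the kubelet's sync loop, and a volume marked optional mounts (empty) even while its secret is absent, picking the data up once the secret appears. A minimal sketch of the optional case; pod name, secret name, and key are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-optional-demo
    spec:
      containers:
      - name: creates-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: maybe-secret
          mountPath: /etc/secret-volumes/maybe
      volumes:
      - name: maybe-secret
        secret:
          secretName: s-test-opt-create
          optional: true
    EOF
    # The pod starts despite the missing secret; creating it later populates the mount
    kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
    # After the kubelet's next sync the key shows up in the volume
    kubectl exec secret-optional-demo -- cat /etc/secret-volumes/maybe/data-1
------------------------------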
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:05:51.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0524 19:06:31.956292      18 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 24 19:07:33.975: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
May 24 19:07:33.975: INFO: Deleting pod "simpletest.rc-25fck" in namespace "gc-945"
May 24 19:07:33.984: INFO: Deleting pod "simpletest.rc-4djjd" in namespace "gc-945"
May 24 19:07:34.011: INFO: Deleting pod "simpletest.rc-jzq85" in namespace "gc-945"
May 24 19:07:34.026: INFO: Deleting pod "simpletest.rc-mlhbh" in namespace "gc-945"
May 24 19:07:34.034: INFO: Deleting pod "simpletest.rc-pjf2f" in namespace "gc-945"
May 24 19:07:34.041: INFO: Deleting pod "simpletest.rc-qlwqb" in namespace "gc-945"
May 24 19:07:34.048: INFO: Deleting pod "simpletest.rc-t7hgs" in namespace "gc-945"
May 24 19:07:34.054: INFO: Deleting pod "simpletest.rc-tgrnp" in namespace "gc-945"
May 24 19:07:34.062: INFO: Deleting pod "simpletest.rc-vtzd5" in namespace "gc-945"
May 24 19:07:34.068: INFO: Deleting pod "simpletest.rc-x5bth" in namespace "gc-945"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:07:34.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-945" for this suite.
• [SLOW TEST:102.196 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":36,"skipped":552,"failed":0}
May 24 19:07:34.086: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:06:28.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0524 19:06:38.950126      27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 24 19:07:40.969: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:07:40.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-360" for this suite.
• [SLOW TEST:72.095 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":30,"skipped":556,"failed":0}
May 24 19:07:40.981: INFO: Running AfterSuite actions on all nodes
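------------------------------
The two garbage-collector specs above exercise both sides of the same switch: deleting the RC with an orphan propagation policy clears the ownerReferences from its pods and leaves them running (the ten "Deleting pod" lines are the test cleaning up those survivors itself), while the default background propagation lets the garbage collector remove the pods. From the command line, using the RC name implied by the logged pod names; --cascade=orphan is the current spelling, and older kubectl releases used --cascade=false:

    # Orphan the pods: they outlive the RC, with ownerReferences removed
    kubectl -n gc-945 delete rc simpletest.rc --cascade=orphan
    kubectl -n gc-945 get pods
    # Default background cascade: the GC deletes the pods once the owner is gone
    kubectl -n gc-360 delete rc simpletest.rc
------------------------------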
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 19:05:44.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod busybox-5bb402b7-b517-4be1-9386-c248bcbe615e in namespace container-probe-6284
May 24 19:05:46.459: INFO: Started pod busybox-5bb402b7-b517-4be1-9386-c248bcbe615e in namespace container-probe-6284
STEP: checking the pod's current state and verifying that restartCount is present
May 24 19:05:46.463: INFO: Initial restart count of pod busybox-5bb402b7-b517-4be1-9386-c248bcbe615e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 19:09:47.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6284" for this suite.
• [SLOW TEST:243.291 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":591,"failed":0}
May 24 19:09:47.708: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":835,"failed":0}
May 24 19:06:33.441: INFO: Running AfterSuite actions on all nodes
May 24 19:09:47.783: INFO: Running AfterSuite actions on node 1
May 24 19:09:47.783: INFO: Skipping dumping logs from cluster

Ran 291 of 5667 Specs in 624.902 seconds
SUCCESS! -- 291 Passed | 0 Failed | 0 Pending | 5376 Skipped

Ginkgo ran 1 suite in 10m26.681513386s
Test Suite Passed
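------------------------------
For reference, the container-probe spec above watches a pod for roughly four minutes (19:05:46 to 19:09:47) to confirm that an exec liveness probe which keeps succeeding never bumps restartCount. A minimal pod with that probe shape; the name, image tag, and timings are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-liveness-demo
    spec:
      containers:
      - name: busybox
        image: busybox:1.29
        # The probe target exists for the life of the container, so the probe always passes
        command: ["sh", "-c", "touch /tmp/health && sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # restartCount should remain 0 for as long as you care to watch
    kubectl get pod busybox-liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
------------------------------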